Everything posted by proximagr

  1. Get early access to large disks support of Azure Backup & more

Azure Backup's 1TB limitation is finally over! You can now back up VMs with disk sizes up to 4TB (4095GB), both managed and unmanaged. There are also improvements to backup and restore performance, which you can find here. Starting today, log in to the Portal and go to your Recovery Services vault; you will see a notification saying "Support for >1TB disk VMs and improvements to backup and restore speed ->". Click the notification and the "Upgrade to new VM Backup stack" blade will open. There, click "Upgrade" to complete the upgrade. You can also upgrade all the Recovery Services vaults of a subscription using Azure PowerShell:

1. Select the subscription:

Get-AzureRmSubscription –SubscriptionName "SubscriptionName" | Select-AzureRmSubscription

2. Register this subscription for the upgrade:

Register-AzureRmProviderFeature -FeatureName "InstantBackupandRecovery" –ProviderNamespace Microsoft.RecoveryServices
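Feature registration is not instantaneous, so you may want to poll until the state flips to Registered before relying on the new stack. A minimal, Azure-agnostic polling sketch; the `check` callable is a hypothetical hook, e.g. something that wraps Get-AzureRmProviderFeature and returns True once FeatureState is "Registered":

```python
import time

def wait_until(check, timeout=900, interval=30):
    """Poll check() until it returns True or the timeout (seconds) expires.

    check is any zero-argument callable, e.g. one that queries the
    provider feature state. Returns True on success, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

Called, for example, as `wait_until(lambda: feature_state() == "Registered")`, where `feature_state` is whatever lookup you have available.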
  2. Azure App Service, get data from on-premises databases securely

There are many scenarios where we want the web application in the cloud while, due to various limitations, the database stays on-premises. Azure has a service, called Azure Hybrid Connections, that allows the Web App to connect to on-premises databases, using an internal IP address or the database server's host name, without a complex VPN setup.

[Connection diagram]

I have tested the connection with Microsoft SQL Server, PostgreSQL, MySQL, MongoDB and Oracle. The database requirement is a static port, so the first step in the case of a Microsoft SQL instance is to assign a static port. In my test environment I have Microsoft SQL Server 2016 and I assigned the default port 1433, using SQL Server Configuration Manager / SQL Server Network Configuration / Protocols for INSTANCENAME (MSSQLSERVER).

All paid service plans support hybrid connections; the limits are on how many hybrid connections can be used per plan, as the table below shows.

Pricing plan | Hybrid Connections usable in the plan
Basic        | 5
Standard     | 25
Premium      | 200
Isolated     | 200

To start creating the Hybrid Connections, go to App Service / Networking / Hybrid Connections and press "Configure your hybrid connection endpoints". At the Hybrid connections blade there are two steps: the first is to "Add hybrid connection" and the second is to "Download the connection manager". First click "Add hybrid connection" and then press "Create new hybrid connection". The "Create new hybrid connection" blade will open. Add a Hybrid connection name; this must be at least 6 characters and it is the display name of the connection. At the Endpoint host add the hostname of the database server, and at the Endpoint port the port of the database. In my case I added 1433, as this is the port I assigned to my SQL instance before. Finally you will need to specify a name for a Service Bus namespace; as you realize, the hybrid connection uses Azure Service Bus for the communication. Press OK.

Once the connection is created it will be shown at the portal as "Not connected". Now we need to download and install the hybrid connection manager by clicking "Download connection manager". For this test I will install the hybrid connection manager on the same server as the SQL database, but for a production environment it is recommended to install the hybrid connection manager on a different server that has access to the database servers only on the required ports. For the best security, install it on a DMZ server and open only the required ports to the database servers. Run the downloaded msi and just click Install. Open the "Hybrid connection manager" UI and press "Add a new Hybrid Connection". Sign in to your Azure account. Once logged in, choose your subscription and the hybrid connection configured previously will appear. Select it and press Save. Now the connection manager status will show "Connected", the same at the Azure Portal, and your Hybrid connection is ready.

Test, test, test and proof of concept: open the Console from the Web App blade, tcpping the SQL server's hostname at port 1433, and also try sqlcmd.

The post Azure App Service, get data from on-premises databases securely appeared first on Apostolidis IT Corner. Source
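The tcpping check done from the App Service console can be mirrored from any client with a short script. A minimal sketch of such a reachability probe (the host and port are whatever your hybrid connection endpoint uses, 1433 in this post):

```python
import socket

def tcp_ping(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout.

    Mirrors the console's tcpping check used to verify that the hybrid
    connection can reach the on-premises SQL endpoint.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

For example, `tcp_ping("sqlserver01", 1433)` (hypothetical hostname) should return True once the connection manager shows "Connected".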
  4. Secure your Azure SQL Database inside a VNET using service endpoints

For many, one obstacle to using Azure SQL is its public access. After the latest Azure updates we can use service endpoints to secure Azure SQL inside a VNET. So let's start by putting Azure SQL inside a VNET. Open the Azure Portal and start creating a VNET. At the bottom of the creation page a new option has been added, called service endpoints. Enable it and select Microsoft.Sql.

Next, create a SQL Database. Again from the Azure Portal select New -> SQL Database and fill in whatever details you want. Once the SQL Database is created, open its settings and go to Firewall / Virtual Networks. There, disable "Allow access to Azure Services"; with this option we cut off access to the SQL from the public IP. To connect the SQL to the VNET, press "+Add existing virtual network" and create a rule selecting the VNET we created with service endpoints enabled.

Time for a test. A quick way to test connectivity to a SQL database is the "ODBC Data Source Administrator", found in Administrative Tools on all MS Windows Server and Professional clients. If you try to connect over the internet you will see that the connection is blocked at the TCP level; the connection doesn't even open, as if the server doesn't exist. So I created a VM inside the VNET to have local access. Open the ODBC Data Source Administrator and at the User DSN tab add a new connection. For the name give whatever you want, it doesn't matter, and for the server give the FQDN of the Azure SQL Database. On the next screen enter the username and password of the Azure SQL Database and press "Test Data Source". You can also connect with SSMS, entering the SQL Server FQDN, the username and the password, and it connects quickly and securely!

The post Secure your Azure SQL Database inside a VNET using service endpoints (Ασφάλισε την Azure SQL Database μέσα σε ένα VNET χρησιμοποιώντας service endpoints) appeared first on Apostolidis IT Corner, February 6, 2018, by Pantelis Apostolidis. Source
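The DSN created in the ODBC Data Source Administrator can equally be expressed as an ODBC connection string, e.g. for use with pyodbc from the VM inside the VNET. A sketch of a connection-string builder; the driver name and the server FQDN are assumptions, so substitute the ODBC driver actually installed and your own server name:

```python
def azure_sql_conn_str(server_fqdn, database, user, password,
                       driver="ODBC Driver 17 for SQL Server"):
    """Build an ODBC connection string for an Azure SQL Database.

    server_fqdn is the FQDN entered in the DSN test, e.g.
    "myserver.database.windows.net" (hypothetical name). Azure SQL
    requires encryption, hence Encrypt=yes.
    """
    return (
        f"DRIVER={{{driver}}};"
        f"SERVER={server_fqdn},1433;"
        f"DATABASE={database};"
        f"UID={user};PWD={password};"
        "Encrypt=yes;TrustServerCertificate=no;"
    )
```

The string would then be passed to `pyodbc.connect(...)` from a machine that has local (VNET) access to the database.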
  6. Azure Update Management

Have you checked the update management system for your Azure and on-premises servers that supports both Windows and Linux operating systems? And it is completely free! Please find the full list of supported operating systems and prerequisites here: https://docs.microsoft.com/en-us/azure/operations-management-suite/oms-solution-update-management#prerequisites.

Let's get started. The easiest way is to start from an Azure VM. Go to the VM's blade and find "Update management". You will see a notification that the solution is not enabled. Click the notification and the "Update Management" blade will open. Update Management is an OMS solution, so you will need a "Log Analytics" workspace; you can use the Free tier. If you don't have a Log Analytics workspace the wizard will create a default one for you, and it will also create an Automation account. Pressing Enable will enable the Update Management solution. After about 15 minutes, at the "Update Management" section of the VM you will see the report of the VM's updates.

After that process the Automation account is created and we can browse to the "Automation Accounts" service at the Azure Portal. There, click the newly created Automation account and scroll to the "Update Management" section, where we can see a full report of all VMs that we add to the Update Management solution. To add more Azure VMs simply click the "Add Azure VM" button. The Virtual Machines blade will open and list all Virtual Machines in the tenant. Select each VM and press Enable.

After all required VMs are added to the Update Management solution, click the "Schedule update deployment" button. There we select the OS type of the deployment, the list of computers to update, what types of updates to deploy and the schedule; more or less, this is familiar to anyone that has worked with WSUS. Press "Computers to update" to select the Azure VMs for this deployment from the list of all enabled VMs. Then select what types of updates to deploy. If you want to exclude any specific update you can add its KB number at the "Excluded updates" blade. And finally select the schedule on which the update deployment will run.

Back at the "Update Management" blade, as we already said, we have complete update monitoring of all Virtual Machines that are part of the Update Management solution. You can also go to the Log Analytics workspace and open the OMS Portal. There, among others, you will see the newly added "System Update Assessment" solution, giving full monitoring and reporting of the updates of your whole environment.

The post Azure Update Management appeared first on Apostolidis IT Corner. Source
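The deployment's selection logic (keep only the chosen update classifications, then drop any KB listed on the "Excluded updates" blade) can be sketched as a simple filter. This is a simplified model for illustration, not the actual service behaviour:

```python
def updates_to_deploy(available, included_types, excluded_kbs):
    """Filter available updates: keep those whose classification is
    selected, then drop any KB number that was explicitly excluded.

    available is a list of (kb_number, classification) tuples - a
    simplified stand-in for the real update metadata.
    """
    # Normalize KB identifiers so "KB4041691" and "4041691" match.
    excluded = {kb.upper().lstrip("KB") for kb in excluded_kbs}
    return [
        (kb, cls) for kb, cls in available
        if cls in included_types and kb.upper().lstrip("KB") not in excluded
    ]
```

With classifications {"Security"} selected and one KB excluded, only the remaining security updates survive the filter, matching what the deployment wizard would schedule.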
  8. Custom pfSense on Azure Rm | a complete guide

A complete guide on how to create a pfSense VM on a local Hyper-V server, prepare it for Microsoft Azure, upload the disk to Azure and create a multi-NIC VM.

Download the latest image from https://www.pfsense.org/download/. Open Hyper-V Manager and create a Generation 1 VM. I added 4096 MB RAM and 2 cores, used a VHD, added an extra NIC (for the second interface) and selected the downloaded ISO. (Create a fixed VHD, as Azure supports only fixed VHDs for custom VMs.) Start the VM and at the first screen press Enter; at all screens I accepted the default settings. Finally, at the reboot prompt, remove the installation ISO. There is no need to set up VLANs; select the second interface for WAN and the first for LAN. Once pfSense is ready, press 2 and change the LAN (hn0) interface IP to one on your network. Then select option 14 to enable SSH. Now we can log in with PuTTY, with username admin / password pfsense, and press 8 for shell access. The first thing is to update the packages by running:

pkg upgrade

Python

Then install Python, as it is a requirement for the Azure Linux Agent. Search for Python packages by running:

pkg search python

Install the latest Python package, setuptools and bash:

pkg install -y python27-2.7.14
pkg search setuptools
pkg install py27-setuptools-36.2.2
ln -s /usr/local/bin/python /usr/local/bin/python2.7
pkg install -y bash

Azure Linux Agent

ref: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/classic/freebsd-create-upload-vhd

pkg install git
git clone https://github.com/Azure/WALinuxAgent.git
cd WALinuxAgent
git tag
git checkout WALinuxAgent-2.1.1
git checkout WALinuxAgent-2.0.16
python setup.py install
ln -sf /usr/local/sbin/waagent /usr/sbin/waagent

Check the agent is running:

waagent -Version

One final step before uploading the VHD to Azure is to set the LAN interface to DHCP. This can be done from the web interface: go to https://lanaddress, log in using admin / pfsense, go to Interfaces / LAN and select DHCP as the IPv4 configuration. Now shut down pfSense and upload the VHD to Azure Storage. I use Storage Explorer, https://azure.microsoft.com/en-us/features/storage-explorer/, a free and powerful tool to manage Azure Storage. Log in to your Azure account and press Upload. Select Blob type "Page blob". After the upload is completed we can create a multiple-NIC VM. This cannot be accomplished from the GUI; we will create it using PowerShell:

$ResourceGroupName = "******"
$pfresourcegroup = "*******"
$StorageAccountName = "******"
$vnetname = "*****"
$NSGname = "******"
$location = "West Europe"
$vnet = Get-AzureRmVirtualNetwork -Name $vnetname -ResourceGroupName $ResourceGroupName
$backendSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name default -VirtualNetwork $vnet
$vmName = "pfsense"
$vmSize = "Standard_F1"
$pubip = New-AzureRmPublicIpAddress -Name "PFPubIP" -ResourceGroupName $pfresourcegroup -Location $location -AllocationMethod Dynamic
$nic1 = New-AzureRmNetworkInterface -Name "EXPFN1NIC1" -ResourceGroupName $pfresourcegroup -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pubip.Id
$nic2 = New-AzureRmNetworkInterface -Name "EXPFN1NIC2" -ResourceGroupName $pfresourcegroup -Location $location -SubnetId $vnet.Subnets[0].Id
$vm = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize
$vm | Set-AzureRmVMOSDisk -VhdUri https://********.blob.core.windows.net/vhds/pfsensefix.vhd -Name pfsenseos -CreateOption attach -Linux -Caching ReadWrite
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic1.Id
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic2.Id
$vm.NetworkProfile.NetworkInterfaces.Item(0).Primary = $true
New-AzureRmVM -ResourceGroupName $pfresourcegroup -Location $location -VM $vm -Verbose

Once the VM is created, go to the VM's blade and scroll down to "Boot diagnostics"; there you can see a screenshot of the VM's monitor. Then go to the Networking section and SSH to the public IP, and we can also log in to the web interface of pfSense. In my case I added both NICs to the same subnet, but in a production environment add the LAN interface to the backend subnet and the WAN interface to the DMZ (public) subnet. Of course, more NICs can be added to the VM, one for each subnet in our environment.

Route external traffic through the pfSense

We cannot change the gateway of an Azure VM, but we can use route tables to route the traffic through the pfSense. From the Azure Portal, select New and search for Route table. We need to configure two things: one is to associate the route table to a subnet, and the second is to create a route. Open the "Route table" and click "Routes". Press "Add route"; to route all outbound traffic through the pfSense, add Address prefix "0.0.0.0/0", Next hop type "Virtual appliance" and Next hop address the IP address of the pfSense's LAN interface. Then go to "Subnets" and associate the required subnets.

The post Custom pfSense on Azure Rm | a complete guide appeared first on Apostolidis IT Corner. Source
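The route table behaviour described above, a default route sending everything to the pfSense LAN IP while more specific prefixes still win, follows longest-prefix matching. A small sketch of that selection logic; the 10.0.0.4 next hop is a hypothetical pfSense LAN IP:

```python
import ipaddress

def next_hop(routes, destination):
    """Pick the next hop for destination by longest-prefix match,
    as Azure route tables do. routes maps address prefixes to next
    hops; a 0.0.0.0/0 entry pointing at the pfSense LAN IP catches
    all traffic that no more specific route claims.
    """
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, hop in routes.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, hop)
    return best[1] if best else None
```

So internet-bound traffic (e.g. 8.8.8.8) takes the virtual-appliance route, while traffic to the VNET's own range matches the longer prefix and stays local.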
  9. Azure File Sync & DFS Namespace

Azure File Sync is a new Azure feature, still in preview, that allows you to sync a folder between your local file server and Azure Files. This way your files are accessible both locally at your file server and publicly at Azure Files using an SMB 3.0 client. The files can also be protected online using Azure Backup. The idea of this post is to have the files of two file servers sync to Azure Files using Azure File Sync and, in addition, to use the DFS Namespace feature to achieve a common name and availability. This is not something officially supported; it is just an idea of using two different technologies together to provide a service.

The prerequisite before starting with Azure File Sync is to create an Azure Files share. We have covered this in a previous post, check here. Once the Azure Files share is ready, proceed with the Azure File Sync resource. At the Azure Portal press New, search for it and create it. At the Deploy Storage Sync blade select a name for the resource, a subscription, a resource group and a location. When the Azure File Sync resource is ready we need to create a Sync group. A Sync group is something like a DFS Replication group: a group consisting of an Azure Files share and multiple local file servers that sync a folder. Press "+Sync group" and the new "Sync group" blade will open. There, provide a name for the Sync group and select the storage account and the Azure Files share created before. The Sync group is then ready with the cloud endpoint; the next step is to add the first local file server.

Register the local servers

Navigate to https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-server-registration for information on how to download the agent, install it and register the server. After that, press "Add server endpoint". At the "Add server endpoint" blade, select the registered server and add the path to the folder that has the data you want to sync. With Cloud Tiering you select a percentage of the volume of the local server; when the used capacity of the volume reaches this number, Azure File Sync makes the files that are less frequently accessed cloud-only. The file's icon on the server becomes transparent, and if anyone double-clicks the file it is downloaded instantly. Register the second server the same way as the first, and finally the Sync group will have two server endpoints. In my example the second server had no data, just the folder, and Azure File Sync synced all files from server A.

Create a DFS Namespace

The next step is to create a DFS Namespace, just the namespace with the two local servers. Add the folders of both servers and you are ready. Also, if you browse the Azure Files share, all files are accessible.

Notes from the field

Adding or changing a file at the first server almost instantly replicates to the Azure Files share and to the second server. If a file is altered at both servers at the same time, the last-accessed copy keeps its name as-is and the other copy is renamed by appending the server name to the file name, as in the example "enaneoarxeio-AzureFS2.txt", where AzureFS2 is the server name. You can also add Azure Backup and have a cloud backup of all your files.

The post Azure File Sync & DFS Namespace appeared first on Apostolidis IT Corner. Source
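The cloud tiering policy described above (dehydrate the least recently accessed files once used space exceeds the configured volume percentage) can be modelled roughly like this. A simplified sketch for intuition only, not the sync agent's actual algorithm:

```python
def files_to_tier(files, volume_size, free_percent_target):
    """Decide which files cloud tiering would make cloud-only,
    coldest (least recently accessed) first, until used space
    drops to the target.

    files is a list of (name, size_bytes, last_access_ts) tuples.
    free_percent_target is the volume free-space percentage set
    on the server endpoint.
    """
    used = sum(size for _, size, _ in files)
    limit = volume_size * (1 - free_percent_target / 100)
    tiered = []
    for name, size, _ in sorted(files, key=lambda f: f[2]):  # coldest first
        if used <= limit:
            break
        tiered.append(name)
        used -= size  # the file's data is now cloud-only
    return tiered
```

For a 100-unit volume with a 50% free-space target and 90 units used, only the coldest files are dehydrated until usage falls to 50 units; recently accessed files stay local.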
  11. Bulletproof manage your Azure VMs

Continuing the Azure Security Center posts, today we will see a new feature of the Security Center called Just in Time VM Access. As a security best practice, all the management ports of a Virtual Machine should be closed using Network Security Groups; only the ports required for any published services should be opened, if any. However, there are many occasions where we are asked to open a management port for administration, or a service port for some tests, for a short time. This has two major problems. First, it requires a lot of administration time, because the administrator must go to the Azure Portal and add a rule to the VM's NSG. Second, many times the port is forgotten open, and this is a major vulnerability, since the majority of brute-force attacks are performed against the management ports, 22 and 3389. Here comes Azure Security Center with the Just in Time VM Access feature. With this feature we can use the RBAC of the Azure Portal and allow specific users to request a predefined port to be opened for a short time frame.

JIT Configuration

Let's see how we configure JIT. First we need to go to the Azure Security Center, scroll down to ADVANCED CLOUD DEFENSE and click "Just in time VM Access". Since it is in preview, you need to press "Try Just in time VM access". After we enable JIT, the window displays three tabs: Configured, Recommended and No recommendation. The Configured tab displays the Virtual Machines for which we have already enabled JIT. The Recommended tab lists VMs that have NSGs and are recommended to be enabled for JIT. The No recommendation tab lists Classic VMs or VMs that don't have an attached NSG. To enable JIT for a VM, go to the Recommended tab, select one or more VMs and press "Enable JIT on x VMs". At the "JIT VM access configuration" blade the Security Center proposes rules with the default management ports. We can add other ports that we need and remove any that are unnecessary. For each rule we can configure the port, the protocol, the source IP and the maximum request time. If we leave "Allowed source IPs" at "Per request" then we allow the requester to decide; one very interesting option here is that when a user requests access, he can automatically allow only the public IP he is using at that time. With the last option, "Max request time", we narrow down the maximum time that we will allow a port to be opened. After we configure all the parameters we click Save and the VM moves to the Configured tab. At any time we can change the configuration by selecting the VM, pressing the three dots at the end of the line (…) and clicking Edit. The Properties button opens the VM's blade, the Activity log shows all the users that requested access, and Remove, of course, disables JIT.

Behind the scenes

What really happens to the VM? If you browse to the NSG that is attached to the VM, you will see that all the port rules configured in JIT are added as NSG rules with lower priority than all the other rules, and all other rules automatically change priority to a higher one. Let's see how we request access and what happens in the background. To request access, go to Security Center / JIT, select the VM and press "Request Access". At the "Request access" blade, switch on the desired port, select "My IP" or "IP Range" and the time range, all according to the JIT configuration of the VM. Finally, press "Open Ports". In the above example I selected "My IP", so if you go to the VM's NSG you will see that the 3389 port rule changed to "Allow" with my current public IP as source, and it moved to first priority. After the expiration of the time range, the port rule will change back to "Deny" and move back to its prior priority.

The post Bulletproof manage your Azure VMs appeared first on Apostolidis IT Corner. Source
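The NSG mechanics just described (rules evaluated in priority order, with JIT flipping an Allow rule in front of the Deny while the request window is open) can be sketched with a toy evaluator. A simplified model with hypothetical priority numbers, not the actual NSG engine:

```python
def evaluate_nsg(rules, port):
    """Evaluate inbound NSG rules for a destination port the way
    Azure does: lowest priority number is checked first, and the
    first matching rule decides. rules is a list of
    (priority, port_or_None_for_any, action) tuples.
    """
    for priority, rule_port, action in sorted(rules):
        if rule_port is None or rule_port == port:
            return action
    return "Deny"  # implicit default deny

# Before a JIT request: the JIT rule blocks RDP (hypothetical priorities).
baseline = [(100, 3389, "Deny"), (4096, None, "Deny")]

# After "Request access": JIT inserts an Allow at a lower priority
# number, so it wins until the time range expires.
jit_open = [(90, 3389, "Allow")] + baseline
```

Once the request window expires, the Allow entry is withdrawn and evaluation falls back to the Deny rule, matching the behaviour seen in the portal.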
  13. Use Azure Security Center to protect your workloads In this series of posts we will take a walk through the Azure Security Center to see some common usage scenarios, such as how we can use it to protect anything from a single Virtual Machine to a whole Data Center. To make it easier to understand, we will start with a typical Azure IaaS scenario: a Virtual Machine with the IIS role acting as a Web Server. The steps to create the VM are outside this post’s scope, so I will simply describe the process. First we create a Windows Server 2016 Virtual Machine. Second, we log in and add the Web Server (IIS) role. Third, we open port 80 at the VM’s Network Security Group (NSG) and voilà, we can browse to the Azure DNS name of the VM and see the IIS default landing page. At this point the security of the Web Server relies on the Network Security Group rule, a layer 3/4 firewall that allows access to port 80, and of course the Windows Firewall, which does exactly the same. Let’s browse to the Azure Security Center from the Azure Portal. There we see an overview of the security settings for the whole subscription. First, click “Compute”. I will skip the overview and go directly to the “VMs and computers” tab. There we see the name of the VM and the five points of interest. Our VM is not monitored, it doesn’t have endpoint protection and it reports some vulnerabilities.

Recommendation: Enable data collection for subscriptions To start resolving the issues, click the VM to go to the Recommendations blade. The first recommendation says to enable data collection for the subscription. This is, of course, the Log Analytics / OMS (Operations Management Suite) integration, which enables the subscription’s resources to report to Log Analytics. Press “Enable data collection for subscription”. The Data Collection blade will open. There we can enable or disable the automatic provisioning of the monitoring agent. This is the Microsoft Monitoring Agent that connects a Virtual Machine to Log Analytics; we can also use it to connect to SCOM. The second option is to choose a workspace. If you have already created an OMS workspace you can choose it; if not, let it create a new one automatically. Finally, press Save. Returning to the previous blade, you will see that the “Turn on data collection” recommendation is now in the Resolved state. Although this recommendation is resolved instantly, the Microsoft Monitoring Agent is not yet installed. Go back to Compute / Data collection installation status to see the agent installation status. Stay tuned for the next Azure Security Center post to resolve more recommendations. The post Use Azure Security Center to protect your workloads appeared first on Apostolidis IT Corner. Source
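Data collection can also be switched on from PowerShell instead of the portal. A sketch with the Az.Security module; the subscription ID and workspace resource ID are placeholders:

```powershell
# Turn on automatic provisioning of the monitoring agent for the subscription
Import-Module Az.Security
Set-AzSecurityAutoProvisioningSetting -Name "default" -EnableAutoProvision

# Optionally point Security Center at an existing Log Analytics (OMS) workspace
Set-AzSecurityWorkspaceSetting -Name "default" `
    -Scope "/subscriptions/<sub-id>" `
    -WorkspaceId "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"
```

If no workspace setting is supplied, Security Center creates a default workspace automatically, matching the portal behavior described above.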
  15. Use Service Endpoints to protect an Azure Storage Account inside an Azure Virtual Network As we saw in a previous post, we can use Service Endpoints to protect an Azure SQL Server inside an Azure Virtual Network. Today we will see how we can protect a Storage Account. First we need to enable the Microsoft.Storage Service Endpoint on an existing Virtual Network, or create a new Virtual Network with it enabled. In this post I am creating a new Virtual Network, so at the Azure Portal press New and at the search box type “Virtual Network”. Enter the name of the Virtual Network and all the required fields. The only difference is to click “Enable” at Service Endpoints and select “Microsoft.Storage”. After the Virtual Network we can proceed with the Storage Account. Create a Storage Account by going to the Azure Portal, pressing New, searching for “Storage Account” and pressing Create. At the “Create storage account” blade enter all the required fields. The difference here is to click “Enable” at “Virtual Networks”, select the Virtual Network on which you enabled Service Endpoints, and select the desired subnet. After the Storage Account is created, open it and go to the “Firewalls and virtual networks” setting; you will see that the selected Virtual Network and Subnet are configured and that all other networks, including Internet access, are forbidden. Now if you go to the File service of the Storage Account you will get an “Access Denied” message, since you are accessing it from the Internet. In order to access the Storage Account File service (and all the other services, like Blob), I created a Virtual Machine inside the Virtual Network and opened the Portal from it. Now I can access the Storage Account services. Of course, we can temporarily add our public IP to access the Storage Account configuration, make the required changes, and then remove it. We can also add or remove existing and new networks. The post Use Service Endpoints to protect an Azure Storage Account inside an Azure Virtual Network appeared first on Apostolidis IT Corner. Source
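The portal steps above can also be scripted. A sketch with Az PowerShell, assuming an existing VNET “myVnet” with a subnet “default” and a storage account “mystorageacct” (all names are placeholders):

```powershell
# Enable the Microsoft.Storage service endpoint on the subnet
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myRG" -Name "myVnet"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "default" `
    -AddressPrefix "10.0.0.0/24" -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork

# Deny all traffic to the storage account by default, then allow only that subnet
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myRG" -Name "mystorageacct" -DefaultAction Deny
$subnet = (Get-AzVirtualNetwork -ResourceGroupName "myRG" -Name "myVnet").Subnets |
    Where-Object Name -EQ "default"
Add-AzStorageAccountNetworkRule -ResourceGroupName "myRG" -Name "mystorageacct" `
    -VirtualNetworkResourceId $subnet.Id
```

After this runs, any access from outside the subnet (including the portal’s File service browser) returns the same “Access Denied” described above, unless you also add your public IP with Add-AzStorageAccountNetworkRule -IPAddressOrRange.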
  17. Protect your Web App using Azure Application Gateway Web Application Firewall A Web Application Firewall was always a big investment for a small or growing company, as most of the top-brand vendors charge a lot of money. A Web Application Firewall protects your application from common web vulnerabilities and exploits like SQL injection or cross-site scripting. Azure provides an enterprise-grade Web Application Firewall through the Application Gateway. It comes in two sizes, Medium and Large. You can find more about sizes and instances here, and more about pricing here. We can add the Application Gateway Web Application Firewall to protect both an Azure Web App (PaaS) and a Web Application inside a VM’s web server (IaaS). In this post we will see how to protect them both. One difference in order to fully protect the Azure Web App (PaaS) is to create an App Service Environment with an internal VIP to host the Web App, so that it is hidden inside a VNET. First things first: create a VNET with one subnet for the Application Gateway WAF.

App Service Environment After the VNET, create the App Service Environment from the Azure Portal: New –> App Service Environment, and select VIP Type “Internal”. Add it to the VNET created before and create a subnet for the ASE. You need to be patient here, because the deployment takes more than an hour, almost two.

Web App As soon as the App Service Environment is ready we can create our Web App. Create a Web App from the Azure Portal with one difference: for the App Service Plan location, instead of selecting a Region, select the App Service Environment. As you realize, the Web App resides on the internal VNET with no access from the Internet, so in order to access the application at this point we need a VM (a small one, just to test and deploy our application). Create a small VM and add it to this VNET. One small detail: in order to be able to browse to the site’s URL we need to enter the FQDN, in our case papwaf3app.funniest.gr. To do this we need an entry in the VM’s hosts file. This way we can access the newborn Web App.

Web Application Firewall Let’s create the secure public entry point for our Web App. Create an Application Gateway: select the WAF tier, select the required SKU, add it to the WAF subnet we created before, select a Public IP configuration and enable the WAF. When the Application Gateway is ready we need to do some configuration. First, at the Backend pools, open the default backend pool and add the Internal Load Balancer IP address of the ASE as the target. Then add a health probe; for the host, add the FQDN of the Web App. At the HTTP settings, check “Use custom probe” and select the previously created probe. And that’s all. Now we can try our Web App from the Internet. To do so we need to browse to the Web App’s URL, which is now published by the Application Gateway. So we need to create a public DNS record to point the FQDN to the Application Gateway’s FQDN; in this case we need to create a CNAME for papwaf3app.funniest.gr pointing to 8b0510c1-47e9-4b94-a0ff-af92e4455840.cloudapp.net. To test the app right now, we can simply add a hosts file entry on our computer pointing to the public IP address of the Application Gateway, and we can access the Web App behind the WAF.

Logging In order to see the Application Gateway and Web Application Firewall logs we need to turn on diagnostics. The easiest way to see the logs is by sending them to Log Analytics (OMS). With the firewall in “Detection” mode, if we try an SQL injection (?id=10||UTL_INADDR.GET_HOST_NAME( (SELECT user FROM DUAL) )–), the Web App still serves the landing page. By switching the firewall to “Prevention” mode, the same SQL injection attack is stopped by the WAF before it reaches our Web App.

Protect an IaaS Web Application To add a Web Application that runs inside a VM behind the Application Gateway Web Application Firewall, first add the VM as a backend pool. Create a new backend pool, select “Virtual Machine”, and select the Virtual Machine that runs the Web Application. Then create a new probe with the URL of the Web Application. Next, add HTTP settings and select the newly created probe, “vmsite”, as the custom probe. The next step is to create two multi-site listeners, one for each host name. After the listeners, add a Basic rule using the listener, backend pool and HTTP settings we created for the VM Web Application. One extra step is to change the default rule1 to listen on the Web App listener. Finally, the Application Gateway Web Application Firewall provides secure access to both the Web App (PaaS) and the VM Web Application (IaaS). The post Protect your Web App using Azure Application Gateway Web Application Firewall appeared first on Apostolidis IT Corner. Source
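Switching between Detection and Prevention mode, as described above, can also be done with PowerShell. A sketch with Az PowerShell, assuming the gateway is named “myAppGw” (a placeholder):

```powershell
# Flip the WAF from Detection to Prevention mode (OWASP 3.0 rule set)
$appgw = Get-AzApplicationGateway -ResourceGroupName "myRG" -Name "myAppGw"
Set-AzApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $appgw `
    -Enabled $true -FirewallMode "Prevention" -RuleSetType "OWASP" -RuleSetVersion "3.0"
# Push the updated configuration back to Azure
Set-AzApplicationGateway -ApplicationGateway $appgw
```

In Prevention mode the same SQL injection attempt is blocked at the gateway and logged to the diagnostics destination configured earlier.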
  19. Secure your Azure SQL locally inside your vnet using service endpoints For many companies, a drawback of using Azure SQL was its public access. After the latest Azure updates you can use service endpoints to secure your Azure SQL locally inside your VNET! For the time being the feature is available only in the West Central US, West US 2 and East US regions, but more will follow soon. So, let’s secure your Azure SQL locally inside your VNET! At the VNET creation blade, select the Microsoft.Sql service endpoint from the list of available service endpoints. Then create an SQL Database in the same region. Next, go to the SQL server firewall settings and turn off “Allow access to Azure services”. By doing this you disable access to the SQL Server over its public endpoint. Click “Add existing virtual network” and create an access rule, so that you can access the SQL Server from your Virtual Network using the service endpoint. Now let’s test. A fast way to test SQL connectivity from a Virtual Machine on the VNET, without having the SQL management tools, is to open the “ODBC Data Source Administrator” and create a new connection. Add the Azure SQL Server address, at the next screen enter the username and password of your SQL Server, and finally click “Test Data Source”. Of course, we can also connect with SSMS: add the SQL Server FQDN, the username and the password, and you are connected, fast and securely! You cannot yet add your SQL Server to a subnet, but you secure its access inside your VNET: all public access is denied. The post Secure your Azure SQL locally inside your vnet using service endpoints appeared first on Apostolidis IT Corner. Source
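The same setup can be scripted. A sketch with Az PowerShell; the VNET, subnet, resource group and server names are placeholders:

```powershell
# Enable the Microsoft.Sql service endpoint on the subnet
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myRG" -Name "myVnet"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "default" `
    -AddressPrefix "10.0.0.0/24" -ServiceEndpoint "Microsoft.Sql" | Set-AzVirtualNetwork

# Allow that subnet on the SQL server, replacing the public "Allow access to Azure services" rule
$subnet = (Get-AzVirtualNetwork -ResourceGroupName "myRG" -Name "myVnet").Subnets |
    Where-Object Name -EQ "default"
New-AzSqlServerVirtualNetworkRule -ResourceGroupName "myRG" -ServerName "mysqlserver" `
    -VirtualNetworkRuleName "allow-myvnet" -VirtualNetworkSubnetId $subnet.Id
```

Once the virtual network rule exists, connections from that subnet flow over the service endpoint, while the public firewall can stay fully closed.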
  21. I just received my first Microsoft Azure MVP award! I feel happy and proud that my effort and my contribution to the community are being rewarded. I believe in the community and in sharing knowledge; this has helped me a lot in my life, and I in turn try to help as much as I can. It all starts with this wonderful email: Congratulations! We are extremely pleased to present you with the 2018-2019 Microsoft Most Valuable Professional (MVP) Award! This award is given to exceptional technical community leaders who share their remarkable passion, real-world knowledge, and technical expertise with others through demonstration of exemplary commitment. We appreciate your outstanding contributions in the Microsoft Azure technical communities during the past year. The post My First Microsoft Azure MVP award! appeared first on Apostolidis IT Corner. Source
  22. Create Azure File Shares at your ARM template using PowerShell Using an Azure Resource Manager template deployment you can create a Storage Account, but you cannot create File Shares. Azure File Shares can be created using the Azure Portal, Azure PowerShell or the Azure CLI. The idea, essentially, is to run a PowerShell script that creates the File Shares, and to invoke this script from within the ARM template. In order to use a PowerShell script from a template, the script must be called from a URL; a good way to provide this is a Git repository. One major thing to consider is that the Storage Account key must be passed to the PowerShell script securely, since the PowerShell script sits at a public URL. The PowerShell script will run inside a Virtual Machine, and we will use a CustomScriptExtension extension to deliver it. To use this, add a resources section to the Virtual Machine resource of the JSON file. The Custom Script Extension is located inside the Virtual Machine resource. Let’s assume that the last part of the Virtual Machine resource is the “diagnosticsProfile”; after the closure of the “diagnosticsProfile” we can add the “resources”. 
Inside the “resources”, add the “extensions” resource that will add the “CustomScriptExtension”, like below.

The Template Part This will be the addition to the Virtual Machine resource:

  "diagnosticsProfile": {
    "bootDiagnostics": {
      "enabled": true,
      "storageUri": "[concat(reference(concat('Microsoft.Storage/storageAccounts/', variables('diagnosticStorageAccountName')), '2016-01-01').primaryEndpoints.blob)]"
    }
  }
},
"resources": [
  {
    "name": "AzureFileShares",
    "type": "extensions",
    "location": "[variables('location')]",
    "apiVersion": "2016-03-30",
    "dependsOn": [
      "[resourceId('Microsoft.Compute/virtualMachines', parameters('VMName'))]",
      "[variables('AzureFilesStorageId')]"
    ],
    "tags": {
      "displayName": "AzureFileShares"
    },
    "properties": {
      "publisher": "Microsoft.Compute",
      "type": "CustomScriptExtension",
      "typeHandlerVersion": "1.4",
      "autoUpgradeMinorVersion": true,
      "settings": {
        "fileUris": [
          "https://raw.githubusercontent.com/######/#####/master/azurefiles.ps1"
        ]
      },
      "protectedSettings": {
        "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ','azurefiles.ps1 -SAName ',parameters('AzureFilesStorageName'),' -SAKey ', listKeys(resourceId(variables('AzureFilesStorageAccountResourceGroup'),'Microsoft.Storage/storageAccounts', parameters('AzureFilesStorageName')), '2015-06-15').key1)]"
      }
    }
  }
]
},

The extension must depend on the Virtual Machine that will run the script and on the Storage Account that will be used for the file shares. In the custom script settings, add the public raw URL of the PowerShell script. Next, let’s look at the Storage Account key and the execution part. In the commandToExecute section we build an expression that passes the Storage Account name and key into the script for execution. The expression retrieves the Storage Account key with listKeys(), using the permissions of the account running the template deployment. 
Of course, to make the template more flexible, I have added a variable for the Resource Group and a parameter for the AzureFilesStorageName, so the template will ask for the Storage Account name in its parameters.

The PowerShell The PowerShell script has been tested on a Windows Server 2016 VM. You can find it below:

Param (
    [Parameter()]
    [string]$SAKey,
    [string]$SAName
)
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
Install-Module Azure -Confirm:$False
Import-Module Azure
$storageContext = New-AzureStorageContext -StorageAccountName $SAName -StorageAccountKey $SAKey
$storageContext | New-AzureStorageShare -Name #####

The post Create Azure File Shares at your ARM template using PowerShell appeared first on Apostolidis IT Corner. Source
  23. Create Azure File Shares at your ARM template using PowerShell Using Azure Resource Manage template deployment, you can create a Storage account but you cannot create File Shares. Azure File Shares can be created using the Azure Portal, the Azure PowerShell or the Azure Cli. Mainly, the idea is to run a PowerShell script that will create the File Shares. This script will be invoked inside the ARM Template. In order to use a PowerShell script from a template, the script must be called from a URL. A good way to provide this is using the Git repository. One major thing to consider is the Storage Account key must be provided to the PowerShell script securely, since the PowerShell script is at a public URL. The PowerShell script will run inside a Virtual Machine and we will use a CustomScriptExtension Extension to provide it. To use this, at the Virtual Machine Resource of the JSON file add a resources section. The Custom Script Exception is located at the Virtual Machine resource. Lets assume that the last part of the Virtual Machine resource is the “diagnosticsProfile” so after the closure of the “diagnosticsProfile” we can add the “resources”. Inside the “resources” add the “extensions” resource that will add the “CustomScriptExtension”, like below. 
The Template Part

This is the addition to the Virtual Machine resource:

    "diagnosticsProfile": {
      "bootDiagnostics": {
        "enabled": true,
        "storageUri": "[concat(reference(concat('Microsoft.Storage/storageAccounts/', variables('diagnosticStorageAccountName')), '2016-01-01').primaryEndpoints.blob)]"
      }
    }
  },
  "resources": [
    {
      "name": "AzureFileShares",
      "type": "extensions",
      "location": "[variables('location')]",
      "apiVersion": "2016-03-30",
      "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', parameters('VMName'))]",
        "[variables('AzureFilesStorageId')]"
      ],
      "tags": {
        "displayName": "AzureFileShares"
      },
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.4",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "fileUris": [
            "https://raw.githubusercontent.com/######/#####/master/azurefiles.ps1"
          ]
        },
        "protectedSettings": {
          "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ','azurefiles.ps1 -SAName ',parameters('AzureFilesStorageName'),' -SAKey ', listKeys(resourceId(variables('AzureFilesStorageAccountResourceGroup'),'Microsoft.Storage/storageAccounts', parameters('AzureFilesStorageName')), '2015-06-15').key1)]"
        }
      }
    }
  ]
},

The extension must depend on the Virtual Machine that will run the script and on the Storage Account that will be used for the File Shares. In the custom script settings, add the public raw URL of the PowerShell script. Next, let's look at the Storage Account key and execution part. In the commandToExecute section we provide an expression that passes the Storage Account key and name into the script for execution. The expression gets the Storage Account key from the Storage Account using the permissions of the account running the template deployment. To make the template more flexible I have added a variable for the Resource Group and a parameter for the AzureFilesStorageName, so the template will ask for the Storage Account name in the parameters.
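For completeness, the parameter and variables referenced by the commandToExecute expression could be declared as below. This is a sketch; the exact declarations are not shown in the original post, so the names and values here are assumptions inferred from the template snippet above:

```json
"parameters": {
  "AzureFilesStorageName": {
    "type": "string",
    "metadata": {
      "description": "Name of the Storage Account that will host the Azure File Shares"
    }
  }
},
"variables": {
  "AzureFilesStorageAccountResourceGroup": "[resourceGroup().name]",
  "AzureFilesStorageId": "[resourceId(variables('AzureFilesStorageAccountResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('AzureFilesStorageName'))]"
}
```

With these in place, the deployment prompts for the Storage Account name and the listKeys call resolves against the correct Resource Group.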
The PowerShell

The PowerShell script was tested on a Windows Server 2016 VM. You can find it below:

Param (
    [Parameter()]
    [string]$SAKey,
    [string]$SAName
)
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
Install-Module Azure -Confirm:$False
Import-Module Azure
$storageContext = New-AzureStorageContext -StorageAccountName $SAName -StorageAccountKey $SAKey
$storageContext | New-AzureStorageShare -Name #####
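If the deployment may run more than once, the share creation step can be made idempotent so a rerun does not fail on an existing share. A minimal sketch, assuming the same Azure PowerShell module the script above installs; the share name "myshare" is a hypothetical placeholder:

```powershell
# Sketch: create the share only if it does not already exist (share name is a placeholder)
$ctx = New-AzureStorageContext -StorageAccountName $SAName -StorageAccountKey $SAKey
if (-not (Get-AzureStorageShare -Name "myshare" -Context $ctx -ErrorAction SilentlyContinue)) {
    $ctx | New-AzureStorageShare -Name "myshare"
}
```

This keeps the CustomScriptExtension rerunnable, which matters because the extension executes on every deployment of the template.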
  24. Add multiple managed disks to Azure RM VM

In this post I have created a PowerShell script that helps add multiple managed disks to an Azure RM Virtual Machine. The script will prompt you to log in to an Azure RM account, then it will query the subscriptions and ask you to select the desired one. After that it will query the available VMs and prompt you to select the target VM from the VM list. At this point the script checks the OS disk and defines the storage type of the data disks. If you need a different storage type, check the comments at step 4, e.g. if the OS disk is Premium and you want Standard data disks. The next step is to ask for the disk size. You can check the sizes and billing here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/managed-disks-overview#pricing-and-billing Finally it will ask for the number of disks to create. After this input the script will create the disks, attach them to the VM and update it.

The Script:

# 1. You need to log in to the Azure RM account
Login-AzureRmAccount
# 2. The script will query the Subscriptions that the login account has access to and prompt the user to select the target Subscription from the drop down list
$subscription = Get-AzureRmSubscription | Out-GridView -Title "Select a Subscription" -PassThru
Select-AzureRmSubscription -SubscriptionId $subscription.Id
# 3. The script will query the available VMs and prompt to select the target VM from the VM list
$vm = Get-AzureRmVM | Out-GridView -Title "Select the Virtual Machine to add Data Disks to" -PassThru
# 4. Set the storage type based on the OS disk. If you want to specify something else, change this to: $storageType = "StandardLRS" or "PremiumLRS" etc.
$storageType = $vm.StorageProfile.OsDisk.ManagedDisk.StorageAccountType
# 5. The script will prompt for disk size, in GB
$diskSizeinGB = Read-Host "Enter Size for each Data Disk in GB"
$diskConfig = New-AzureRmDiskConfig -AccountType $storageType -Location $vm.Location -CreateOption Empty -DiskSizeGB $diskSizeinGB
# 6. Enter how many data disks you need to create
$diskquantity = Read-Host "How many disks do you need to create?"
for ($i = 1; $i -le $diskquantity; $i++) {
    $diskName = $vm.Name + "-DataDisk-" + $i.ToString()
    $dataDisk = New-AzureRmDisk -DiskName $diskName -Disk $diskConfig -ResourceGroupName $vm.ResourceGroupName
    $lun = $i - 1
    Add-AzureRmVMDataDisk -VM $vm -Name $diskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun $lun
}
Update-AzureRmVM -VM $vm -ResourceGroupName $vm.ResourceGroupName

You can download the script from here: AddManagedDisks
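Note that the loop above always starts at LUN 0, which will fail if the VM already has data disks attached. One way to avoid the collision is to start numbering after the existing disks; a hedged sketch, reusing the same variables as the script (not part of the original post):

```powershell
# Sketch: offset new LUNs by the number of data disks already attached
$startLun = $vm.StorageProfile.DataDisks.Count
for ($i = 1; $i -le $diskquantity; $i++) {
    # Name the new disks after the existing ones to keep numbering continuous
    $diskName = $vm.Name + "-DataDisk-" + ($startLun + $i).ToString()
    $dataDisk = New-AzureRmDisk -DiskName $diskName -Disk $diskConfig -ResourceGroupName $vm.ResourceGroupName
    Add-AzureRmVMDataDisk -VM $vm -Name $diskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun ($startLun + $i - 1)
}
Update-AzureRmVM -VM $vm -ResourceGroupName $vm.ResourceGroupName
```

On a VM with no data disks this behaves exactly like the original loop.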
  25. Microsoft Azure Nested Virtualization | VM in Nested VM in Azure VM

After my main Microsoft Azure Nested Virtualization | Hyper-V VM inside Azure VM post, we saw two usage scenarios: one is running Hyper-V Replica and the other is running a Web Server in a nested VM on Azure. Now let's have some fun and try to run a VM nested inside a VM nested inside an Azure VM. As a fellow said, VM inception! We will use again the nested VM that we created in the Microsoft Azure Nested Virtualization | Hyper-V VM inside Azure VM post. First we need to run two commands: one to enable virtualization and one to enable MAC address spoofing. You can find more details in the Nested Virtualization Microsoft article.

Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true
Get-VMNetworkAdapter -VMName <VMName> | Set-VMNetworkAdapter -MacAddressSpoofing On

After running the above commands we can go to Server Manager and add the Hyper-V role. I just clicked Next, accepting all the defaults, with one exception: I checked the NIC to use for the Virtual Switch. Finally we have a Hyper-V VM that is nested inside a Hyper-V VM that is nested inside an Azure VM.
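Before installing the Hyper-V role inside the nested VM, you can verify on the host that both settings took effect. A quick check using the standard Hyper-V cmdlets (run on the Hyper-V host, with <VMName> replaced as in the commands above):

```powershell
# Both values should report True and On respectively before nesting one level deeper
Get-VMProcessor -VMName <VMName> | Select-Object VMName, ExposeVirtualizationExtensions
Get-VMNetworkAdapter -VMName <VMName> | Select-Object VMName, MacAddressSpoofing
```

If ExposeVirtualizationExtensions is False, remember the VM must be powered off before Set-VMProcessor can change it.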