CrashPlan on Synology NAS

I recently bought myself a good Synology NAS box.  Shortly after setting it up I started looking at the offsite backup vendors on the market.  In my role as an Infrastructure Lead I have designed and implemented a 3-2-1 backup strategy for my company, so it was time to give my valuable home files similar protection.

CrashPlan was recommended to me for being able to back up directly from the NAS box.  The installation and remote client setup are easy enough for anyone who works in IT, and the backup started rolling.  It looked like I would be able to complete the backup set of my most important files (about 700GB) within the trial month, so I was happy and went ahead and paid for the yearly subscription two weeks later.

Then, 2-3 weeks after first installing the software and still within my trial month, CrashPlan upgraded their application, which forced me to update the CrashPlan application running on my NAS box.  After the update the application stopped running; the application log was full of 'Starting CrashPlan', 'Stopping CrashPlan'.  CrashPlan support wasn't great, and it was then I learnt that running CrashPlan on a Synology NAS box is not supported:

 “We do not design, build, nor test for running directly on NAS devices. Therefore, we do not support this configuration because NAS devices are not typically provisioned with sufficient hardware for CrashPlan to handle the large dataset they create.”

My dataset wasn't that large, and I had already increased the default memory allocation.  This can be done from your workstation with the client you use to connect to the NAS box (https://support.code42.com/CrashPlan/4/Troubleshooting/Adjusting_CrashPlan_Settings_For_Memory_Usage_With_Large_Backups).

Luckily there is an unofficial CrashPlan community out there that does its best to support CrashPlan on NAS devices.  Credit is due to this person for his support on the forum and this blog post:

https://crashplan.setepontos.com/crashplan-4-7-0-installation-fix-for-synology-1435813200470_344-july-2016/

It did not work exactly like that for me; after several retries I figured out I had to reboot the whole DSM before re-installing the new CrashPlan package, and finally my backups were rolling again.  The downside is that you have to “adopt” your backup set again, and if you have a large set this can take some time.  But you have no choice in the matter.

A serious amount of time had passed between my CrashPlan stopping and me sitting down to find this information and put it into action, as I work 45-50 hours per week and, once home, getting my home IT setup working is not necessarily the priority.  By September my first backup set had still not completed.  In fact, I noticed it getting a lot slower as it got closer to the finish line: 82 Kbps of the 6 Mbps upload speed from my ISP.  This was starting to get very frustrating.

[Screenshot: crashplan01]

Either my ISP was throttling traffic to the CrashPlan destination or CrashPlan Central was doing it.  My ISP confirmed no throttling, and as I said before, CrashPlan support is not the greatest.  I explained my issue to them in great detail, and the response I got was that perhaps I did not understand that my upload speed might differ from the download speed my ISP promises.  But they confirmed and promised there was no throttling of incoming traffic on their end.

The only thing I could guess was data de-duplication or compression being done on my end before data was sent over the wire.  After searching for this kind of information, credit is due to this guy who had spotted the configuration and how to override it:

http://networkrockstar.ca/2013/09/speeding-up-crashplan-backups/

It would have been great for CrashPlan support to point this out to me instead of insinuating that I was confused about upload vs download speeds provided by ISPs.

In short, there is a file in your CrashPlan installation directory called my.service.xml that contains this key:

<dataDeDupAutoMaxFileSizeForWan>0</dataDeDupAutoMaxFileSizeForWan>

Stop your CrashPlan application, change this value to 1 and start it up again…

Boom… running at full upload speed now!

[Screenshot: crashplan02]

 

For Synology users this file is found in /volume1/@appstore/CrashPlan/conf/.  As you would have done before to get the UI token to use with the workstation client, SSH into the device, elevate to root and go to the above directory.  Use the VIM editor to edit the my.service.xml file (my preferred text editor, Nano, is not available on the DSM).  A list of the basic VIM editor commands can be found here.

http://www.radford.edu/~mhtay/CPSC120/VIM_Editor_Commands.htm
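
If you would rather script the change than edit it in vi, a sed one-liner does the same job.  This is only a sketch run against a sample stand-in file; the element name and values come from the post, everything else is illustrative, and on a real Synology you would stop the CrashPlan package first and start it again afterwards:

```shell
# The real file lives in /volume1/@appstore/CrashPlan/conf/my.service.xml;
# here we create a tiny sample copy purely to illustrate the edit.
CONF=my.service.xml
printf '<serviceModel>\n  <dataDeDupAutoMaxFileSizeForWan>0</dataDeDupAutoMaxFileSizeForWan>\n</serviceModel>\n' > "$CONF"

# Flip the WAN de-duplication setting from 0 (always de-dup) to 1
sed -i 's#<dataDeDupAutoMaxFileSizeForWan>0<#<dataDeDupAutoMaxFileSizeForWan>1<#' "$CONF"

grep dataDeDup "$CONF"   # now shows the value 1
```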

So finally my backup set has completed, and I have even added more data to it that has also completed.  To this day I am unsure if I can actually recommend CrashPlan, but I want to give a big shout out to the unofficial CrashPlan community that works tirelessly to get this running on DSMs and headless NAS boxes.

Here's hoping CrashPlan starts supporting their application on NAS boxes in the future, as I am sure many of their customers bought the software with that intended usage.

 


Speaking Of Time

Today I learned an interesting yet simple detail about time sources and the Windows Time Service.  After noticing that one of the two Domain Controllers in a little Development/Test domain we have was about 6 minutes out of sync, I learned from a colleague of mine the best and quickest way to query the time sources and change them if needed.

In the registry subkey HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters there is a string value called Type that can hold the value NT5DS or NTP.

[Screenshot: reg2]

NT5DS means the computer is part of a domain and syncs its clock with the Domain Controller that holds the PDC Emulator FSMO role, while NTP means the server syncs its time with a specified time source.  The NTP source is in a value within the same subkey called NtpServer.  Clients that sync with the PDC can still have a value in the NtpServer string, but the PDC Emulator will be the one they sync with if Type is set to NT5DS.

[Screenshot: reg1]

In my scenario, the out-of-sync Domain Controller had been configured to sync with an external time source, but being on a segregated network with no internet access, the time on the server had drifted over a few years.
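
The same information can be queried, and the configuration corrected, with w32tm rather than editing the registry by hand.  A sketch from an elevated prompt; the NTP peer name is a placeholder:

```powershell
# Where is this machine getting its time from right now?
w32tm /query /source
w32tm /query /status

# Point a domain member back at the domain hierarchy (equivalent to Type = NT5DS)
w32tm /config /syncfromflags:domhier /update

# ...or pin it to an explicit NTP peer (equivalent to Type = NTP; placeholder name)
w32tm /config /manualpeerlist:"ntp1.example.local" /syncfromflags:manual /update

Restart-Service w32time
w32tm /resync
```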

Another interesting topic around time sync is the option for VMware virtual machines to synchronise guest time with the host.  I can see this put to use on stand-alone servers that need to be on a segmented network, such as a CDE network, with no access to an NTP server.  One point to mention, though, is that regardless of whether this option is enabled on the virtual machine, VMware Tools will sync the OS with the host clock during certain operations such as reverting a snapshot, power on and restart (via VMware Tools), so it is always important that your ESXi hosts have their NTP settings correctly configured.
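
With PowerCLI you can audit which VMs have that option turned on.  A sketch, assuming an existing Connect-VIServer session:

```powershell
# Report each VM's 'synchronise guest time with host' setting
# as exposed through the VMware Tools section of the VM config
Get-VM | Select-Object Name,
    @{ Name = 'SyncTimeWithHost'; Expression = { $_.ExtensionData.Config.Tools.SyncTimeWithHost } }
```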

 


Microsoft Local Administrators Password Solution (LAPS )

I recently implemented the relatively new Microsoft LAPS solution.  The implementation itself was surprisingly simple for such an effective and powerful security solution.  Microsoft's documentation on this piece is also very good by any standard.

Implementing the solution sets a unique local administrator password on selected computer accounts in Active Directory and stores the password in a protected attribute on the object itself.

This is done by updating the Forest Schema with two new attributes for computer accounts:

ms-Mcs-AdmPwd – This stores the password in clear text

ms-Mcs-AdmPwdExpirationTime – This stores the date and time of the next password reset.

A new ADMX file (AdmPwd.admx) needs to be copied to the central ADMX store.  It enables new GPO settings to configure the password complexity, password length, password age and which local administrator account to manage.  You don't have to specify the name of the built-in account even if you have renamed it for obscurity reasons, as LAPS finds it by its well-known SID (RID 500).

A Group Policy Client Side Extension (GPO CSE) is required on all client computer accounts that are to be configured for LAPS.

Following on from that, you will need to use the ADSI editor to edit the properties of the OU or OUs that hold the targeted computer accounts, open the Advanced security settings and remove 'All extended rights' for security principals that should not have rights to see the passwords.  This is required because the 'All extended rights' permission also grants permission to read confidential attributes.

Next, give the computer accounts rights to edit these new attributes themselves by granting the built-in SELF principal write access.  This is required so the machine can update the password and expiration timestamp of its own managed local Administrator account.

Then give CONTROL_ACCESS permission on the ms-Mcs-AdmPwd attribute of computer accounts to the group(s) that shall be allowed to read the password of the managed local Administrator account on managed computers.  These are usually the IT Helpdesk group and, of course, the Domain Admins group.

Write permission on the ms-Mcs-AdmPwdExpirationTime attribute of computer accounts has to be added for the groups that are allowed to force a password reset of the managed local Administrator account.  These are normally the same groups as above.

All the required components come in a single MSI package, either x86 or x64 depending on your architecture.  A full install includes the Group Policy Client Side Extension that all configured clients require, a fat UI client for admins and IT Helpdesk staff, a PowerShell module, and the Group Policy administrative template to copy to your central GPO store.

My implementation was to create an SCCM Application for the GPO CSE and roll it out in predefined stages as a required application.  A separate SCCM Application rolled out the full package with the Fat UI client and the PowerShell module to all workstations in our IT Workstations AD group.  Once all clients had the GPO CSE installed I proceeded with the rest of the implementation.

The Forest Schema is updated with a single PowerShell command using the provided module.  The operation requires Schema Admin rights.

Once you have the PowerShell module installed, you update the schema with this single command:

Update-AdmPwdADSchema

Afterwards you should manually double-check in the ADSI editor whether any groups were previously given 'All extended rights' on the OUs you are going to target.  In my case there were none.

The delegation for the computer account itself and the rights for AD groups to read the new attributes are also achievable with PowerShell and the provided module.

Set-AdmPwdComputerSelfPermission -OrgUnit "OU=Client Machines,DC=Contoso,DC=Local"

The above command gives the built-in SELF principal within the OU rights to update the new attributes.  Repeat this for each OU needed and any sub-OU where you have removed inheritance in your OU structure.

Set-AdmPwdReadPasswordPermission -OrgUnit "OU=Client Machines,DC=Contoso,DC=Local" -AllowedPrincipals "Domain Admins"

The above command gives the Domain Admins AD group rights to read the new attribute, and therefore to see the password assigned to the local Administrator account.  Repeat this for any AD groups that you have delegated AD rights to, such as IT Helpdesk.

Set-AdmPwdResetPasswordPermission -OrgUnit "OU=Client Machines,DC=Contoso,DC=Local" -AllowedPrincipals "Domain Admins"

The above command gives the Domain Admins AD group rights to change the ms-Mcs-AdmPwdExpirationTime attribute, in effect changing the next password reset time.

Next I configured the GPO and linked it to the OUs I wanted to target.

I use a SIEM solution for auditing purposes, and when a password is read an event is written to the Security log of the Domain Controller.

To enable auditing for the new schema attributes I ran:

Set-AdmPwdAuditing -OrgUnit "OU=Client Machines,DC=Contoso,DC=Local" -AuditedPrincipals "Domain Admins"

Repeat this command for any other OUs, and for any AD principals that will read the attribute.

When a password is successfully read, a 4662 event is logged in the Security log of the Domain Controller.

The new fat UI client queries a computer in AD and returns only the password and next reset time.  This is very handy for IT Helpdesk users who may not be comfortable with PowerShell or with the attribute editor of a computer object in AD.  Remember, only users with rights to view or edit these attributes can see them, no matter which method they use.  The client can also be used to change the next time the password will automatically change.  This is handy in case the IT Helpdesk has to give a remote laptop user who forgot his password the local admin password to get at that important file located on the hard drive.

There are other ways to view the password, such as the attribute editor of the computer account or a PowerShell query, which could be handy if a mass export were needed.  The LAPS client is by far the friendliest to use:

$computername = "IcemansPC"

Get-ADComputer $computername -Properties ms-Mcs-AdmPwd | Select-Object Name, ms-Mcs-AdmPwd
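
For the mass-export case, the same attribute can be pulled for a whole OU and dumped to CSV.  A sketch, assuming the ActiveDirectory module, LAPS read rights, and example OU and output paths:

```powershell
Import-Module ActiveDirectory

# Export the LAPS password for every computer under an example OU to CSV
Get-ADComputer -Filter * -SearchBase "OU=Client Machines,DC=Contoso,DC=Local" -Properties ms-Mcs-AdmPwd |
    Select-Object Name, @{ Name = 'LocalAdminPwd'; Expression = { $_.'ms-Mcs-AdmPwd' } } |
    Export-Csv -Path C:\Temp\laps-export.csv -NoTypeInformation
```

Treat the resulting file as highly sensitive and delete it once it has served its purpose.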

Security considerations

Password is protected by ACL

Protected in transit by LAPS tools

Password is not replicated to RODC and is not revealed in audit logs on a DC (you can edit the Filtered Attribute Set and replicate the attributes to RODCs if it is needed in your scenario)

The full Microsoft guide and MSI package are available here:

https://www.microsoft.com/en-us/download/details.aspx?id=46899

 


VMware Tools

It is widely considered best practice to keep your VMware Tools up to date, especially before upgrading the virtual hardware version.  As even the latest builds within ESXi 5.5 ship updates to VMware Tools, this can become tedious to manage.  An easy way to automate it is to use VMware Update Manager.

If you have VMware Update Manager installed, go to the VMs and Templates view.  From there you can select one of your customised folders or select the cluster.  Then go to the Update Manager tab; within that tab is the VMware Tools upgrade settings button.  It will bring up a window as below.

[Screenshot: vCenter VMware Tools upgrade settings]

Select all the VMs whose VMware Tools you would like upgraded at their next power cycle, and vCenter will do it for you.

If you run mainly Microsoft Windows, this is very handy to plug into your regular security patching restarts.
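
If you prefer scripting it, PowerCLI can do the same.  A sketch, assuming a connected vCenter session; the cluster name is an example:

```powershell
# Upgrade VMware Tools on powered-on Windows VMs in an example cluster,
# holding back the automatic reboot so it can ride along with the next patch window
Get-Cluster "Prod-Cluster" | Get-VM |
    Where-Object { $_.PowerState -eq 'PoweredOn' -and $_.Guest.OSFullName -like '*Windows*' } |
    Update-Tools -NoReboot
```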


Remote PSSession and Scope Modifiers

I had a task to deploy a simple DHCP service setup on a few branch office servers. I took it as a great opportunity to put my PowerShell scripting skills to the test and get some more experience with remote sessions and working with remote variables using the scope modifier.

The requirements were that each server had to be authorised in AD, have a small IP range scope, configure the clients with a local DNS server as their preferred DNS as well as the two datacenter DNS servers (in the code sample they are 10.0.0.1 and 10.0.0.2), and set the DNS domain name as a suffix.

So we start with a CSV with six headers (Name, ScopeName, StartRange, EndRange, Description, DNS):

Name, name of server to configure
ScopeName, name of the scope to configure. This could be the branch or office name/code
StartRange, start of the IP range. Such as 192.168.1.51
EndRange, end of the IP range. Such as 192.168.1.60
Description, description for the scope. Such as “Subnet 1 in Office 1”
DNS, IP address of the local DNS server to configure the clients

The interesting thing when working with a remote PSSession and variables is that you cannot take your local variables with you into the remote session.  You have to call them using a scope modifier, which is simply writing $using:variable1 to use a variable called $variable1 that you have set up earlier.  There are other methods of getting local variables into a remote session, but the scope modifier is in my opinion by far the simplest to use.  It does, however, only work with PowerShell v3 and above.
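
A minimal sketch of the difference (SERVER01 is a placeholder name):

```powershell
$path = 'C:\Temp'

# Without the scope modifier the remote runspace has no idea what $path is;
# this errors because $path is empty on the remote side
Invoke-Command -ComputerName SERVER01 -ScriptBlock { Test-Path $path }

# With $using: the local value travels into the session (PowerShell 3.0+)
Invoke-Command -ComputerName SERVER01 -ScriptBlock { Test-Path $using:path }
```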

You can see this in action in the code below.

#Import your servers (Name, ScopeName, StartRange, EndRange, Description, DNS)
$Servers = Import-Csv "C:\CSVs\DHCP-Servers.csv"

foreach ($Server in $Servers) {

    #As each server has different values, they are loaded per server
    $ServerName  = $Server.Name
    $ScopeName   = $Server.ScopeName
    $StartRange  = $Server.StartRange
    $EndRange    = $Server.EndRange
    $Description = $Server.Description
    $DNS         = $Server.DNS

    #Create a remote PSSession to one server at a time
    $Session = New-PSSession -ComputerName $ServerName

    Invoke-Command -Session $Session -ScriptBlock {

        Add-WindowsFeature -Name DHCP -IncludeManagementTools

        #Authorize the DHCP server in AD
        Add-DhcpServerInDC

        Add-DhcpServerv4Scope -Name $Using:ScopeName -StartRange $Using:StartRange -EndRange $Using:EndRange -SubnetMask 255.255.255.0 -Description $Using:Description

        #Option 6: DNS servers for clients (local branch DNS first, then the datacenters)
        Set-DhcpServerv4OptionValue -OptionId 6 -Value $Using:DNS, 10.0.0.1, 10.0.0.2

        #Option 15: DNS domain name suffix
        Set-DhcpServerv4OptionValue -OptionId 15 -Value "mydomain.local"
    }

    #Close the session before moving on to the next server
    Remove-PSSession -Session $Session
}

GPOs

I had an interesting conversation the other day about GPOs and their precedence order.  This is often a confusing topic, which led me to review it on TechNet for my own benefit.

GPOs are based on ‘whoever writes last wins‘ so GPOs that are processed later have precedence over GPOs that are processed first.

GPO links are applied in reverse sequence based on link order.  A GPO with link order 1 has the highest precedence over other GPOs linked to that container.  You can change the precedence of a link by moving it up or down in the list.  Again, the link with the highest order (1 being the highest) has the higher precedence for a given site, domain or OU.

Group Policy settings are processed in the following order:

  1. Local Group Policy object—Each computer has exactly one Group Policy object that is stored locally. This processes for both computer and user Group Policy processing.
  2. Site—Any GPOs that have been linked to the site that the computer belongs to are processed next. Processing is in the order that is specified by the administrator, on the Linked Group Policy Objects tab for the site in Group Policy Management Console (GPMC). The GPO with the lowest link order is processed last, and therefore has the highest precedence.
  3. Domain—Processing of multiple domain-linked GPOs is in the order specified by the administrator, on the Linked Group Policy Objects tab for the domain in GPMC. The GPO with the lowest link order is processed last, and therefore has the highest precedence.
  4. Organizational units—GPOs that are linked to the organizational unit that is highest in the Active Directory hierarchy are processed first, then GPOs that are linked to its child organizational unit, and so on. Finally, the GPOs that are linked to the organizational unit that contains the user or computer are processed. At the level of each organizational unit in the Active Directory hierarchy, one, many, or no GPOs can be linked. If several GPOs are linked to an organizational unit, their processing is in the order that is specified by the administrator, on the Linked Group Policy Objects tab for the organizational unit in GPMC. The GPO with the lowest link order is processed last, and therefore has the highest precedence.
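
The resulting precedence for a container can be checked from PowerShell with the GroupPolicy module (available via RSAT).  A sketch, with an example OU path:

```powershell
Import-Module GroupPolicy

# List the inherited and linked GPOs for an example OU, in precedence order
$inheritance = Get-GPInheritance -Target "OU=Client Machines,DC=Contoso,DC=Local"
$inheritance.InheritedGpoLinks | Select-Object Order, DisplayName, Enforced
```

On a client, gpresult /r shows the same ordering from the machine's point of view.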

You can block inheritance of GPOs at a domain or OU.  Block Inheritance prevents GPOs linked to higher sites, domains or OUs from being inherited by the child level; by default the child level inherits all GPOs from the parent.  By selecting the Enforced option on a GPO you specify that its settings should take precedence over the settings of any child object.  GPOs that are Enforced also cannot be blocked from the parent container with Block Inheritance.  Without Enforce on a higher-level GPO, settings of GPOs at the parent level are overwritten by settings in GPOs linked to child OUs if the GPOs contain conflicting settings.

GPO links set to enforce (no override) cannot be blocked.

The enforce and block inheritance options should be used sparingly. Casual use of these advanced features complicates troubleshooting.

Loopback processing is an advanced GPO setting that is useful on computers in certain environments, such as classrooms or kiosks.  Setting the loopback causes the User configuration settings in GPOs that apply to the computer to be applied to every user logging on to that computer, instead of (in replace mode) or in addition to (in merge mode) the User Configuration settings of the user.  This allows you to ensure that a consistent set of policies is applied to any user logging on to a particular computer, regardless of their location in AD.

Loopback is controlled by the setting, User Group Policy loopback processing mode, which is located under Computer Configuration\Administrative Templates\System\Group Policy in GPMC.

Loopback can be set to Not configured, Enabled or Disabled.  In the Enabled state, loopback can be set to Merge or Replace.  In either case the user only receives user-related policy settings.

As you can't apply GPOs to the default Users and Computers containers (where newly created users and newly joined computers first land), you can use the Redirusr.exe and Redircmp.exe tools to redirect all newly created user and/or computer accounts to a different default location of your choosing.
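
Both tools take the distinguished name of the target OU and must be run on a domain controller by a Domain Admin.  The OU names below are examples:

```powershell
# Redirect newly created user accounts to a staging OU (example DN)
redirusr "OU=Staging Users,DC=Contoso,DC=Local"

# Redirect newly created computer accounts likewise (example DN)
redircmp "OU=Staging Computers,DC=Contoso,DC=Local"
```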


Rebuilding ESXi 4.x servers that are already in a cluster

Found this good how-to article on rebuilding/upgrading ESXi servers that are already in an ESXi cluster. Good to review before you proceed with your upgrade.  The most important thing would be to disconnect or disable your HBAs before you upgrade!

http://pipe2text.com/?page_id=1578
