Primary operating system vendors are in business to make money. That doesn't always put customer concerns like system longevity in focus. Vendors often want to deprecate their operating system before their customer base is ready to upgrade. They are incentivized to do this in order to create an upgrade and thus revenue cycle that their investors can predict.
However, from a customer perspective there are many situations in which the client would like to version-lock an environment and leave it alone. This might sound silly at first, but consider an example. What if a client has a multi-million-dollar manufacturing operation in which the computer system is just one part? Any significant change introduces severe risk and could even impact production. Yet even in such an environment, there may be a regulatory requirement to keep systems patched.
It might seem silly, but not everyone agrees on what constitutes a patch. There are various parts of the operating system landscape to update, and not all of them go into mainstream patches. Let's break down the differences between patches affecting user-space programs, the kernel, and hardware.
Some, but not all, vendors bifurcate their patches into two categories: security and stability. A stability patch typically fixes an issue discovered by users or the vendor's own lab techs.
An example would be a patch addressing a system crash under specific conditions. Another would be a patch fixing a severe performance issue that only appears when the system is under load. These types of patches might impact the functionality or usability of the system, but they do not affect its security posture. They fix problems.
Security patches are typically used only to address security issues found after the initial release of the operating system. These are the most critical patches to apply under any type of regulatory framework. Why? Because otherwise you could fail an audit. That is (usually) a bigger deal than an instability bug you may or may not be running into.
Never. No way. We are not legally able to do this and we can only touch vendor patches when the customer has clear entitlements with the vendor. At that point, we are just consultants who do your OEM patching for you, but you've paid for the original entitlement. We absolutely will never download a patch from a vendor and “cheat” by putting it on a customer system. It's immoral and illegal.
We only deliver patches which we've developed in-house with our own independent methods.
Customers pay hourly consulting fees for the hotfix or patch they want or need. The rate is determined by talking to your salesperson.
We can also recommend patches based on outstanding CVE reports and quote you on those patches alone. You can then present the result as a regulatory audit artifact showing that your system remains secure and up to date on patches for security issues specifically.
Some hardware devices need their own internal code to function. Think of it as a mini operating system just for that special device. This is called firmware. It's a grey area between hardware and software, but it's definitely more soft. It often lives in NVRAM on the device itself.
You typically find firmware on devices like SCSI HBAs, Fibre Channel HBAs, and network interfaces. Motherboards, of course, also have their own firmware, which folks call a "BIOS" on older systems or EFI (Extensible Firmware Interface) on newer ones.
Firmware images are quite different from OS patches. They don't really touch the operating system at all; they install only on the specific devices they are made for.
Most of the time, OS vendors either do not package up firmware updates at all, or they do so by relying on the device vendor to provide tools and firmware images. In many cases, one can find the OEM's firmware and simply apply it. For older Sun systems, this is often the case. However, for many IBM POWER systems, there are cases where one can only use the firmware updates that IBM packages.
Firmware is definitely not something that PARSEC can be in the business of patching. Where possible and completely legal, we can package any firmware updates that come from the hardware OEM which are documented to be legally allowed for distribution. They can be embedded in the same package format we use (EPM - Enterprise Package Manager). This is the same thing the big vendors do. They just *stop* doing it after the OS becomes end-of-support or end-of-life (EOS & EOL).
Also keep in mind that vendors tend to stop updating firmware one to three years after the release date, because they generally feel confident about the code as bug reports slow down and die off. By the time you'd want to sign a contract with PARSEC, you'd probably be well out of this period. So, to compare apples to apples, the OEM vendor isn't going to give you firmware updates beyond a certain point either, even though it has the legal means to do so.
The kernel is the core of most operating systems. It comprises only the most important and most heavily used parts of the OS. These parts typically run in protected security contexts, which makes any vulnerability in them a severe one.
The question arises, how can a third party company like PARSEC create a kernel patch when we do not have access to the source code for the OS? The answer is simple: we cannot.
Having said that, consider that most operating systems receive kernel patches in the first year or two of their lifespans. After that, the remaining bugs are typically found through everyday use of the OS for its intended purposes.
PARSEC has a product offering to address patching for version-locked customers. We develop, test, and ship a completely separate patch stream for each of the Unix operating systems we support. Currently, we have no option for OpenVMS.
These are absolutely, positively not patches from the operating system vendor. Shipping or including the OS vendor's patches would be a huge mistake for PARSEC, as it's completely illegal; other companies have crashed and burned for no other reason than passing along these vendor patches. These are patches that PARSEC consultants create specifically based on known vulnerabilities.
We select patches based on the OS specific entries in the CVE database. We'll also create specific stability patches where possible. We also specifically patch and update Secure Shell based on security revisions.
Secure Shell is definitely the big item to get updates for. You can't have people using that service for a denial-of-service attack or privilege escalation. Of course, looking at an operating system like Sun Solaris 8 or HP Tru64, we see extremely old and vulnerable versions. So, PARSEC Patches generally replace the entire subsystem, porting your host-keys over where possible. However, rather than ending up with SunSSH (a crappy OpenSSH port) you get real, honest-to-goodness OpenSSH from the 6.x or 7.x series which is far past all the old serious Secure Shell hacks.
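The host-key carry-over step mentioned above can be sketched in plain shell. This is an illustrative outline only, not PARSEC's actual installer logic; the directory paths in the example invocation are assumptions (vendor SSH ports keep keys in varying locations).

```shell
# Sketch of the host-key carry-over step during an SSH subsystem swap.
# Paths are illustrative; vendor ports (e.g. SunSSH) may keep keys elsewhere.
copy_host_keys() {
    old_dir=$1; new_dir=$2
    mkdir -p "$new_dir"
    for key in "$old_dir"/ssh_host_*_key "$old_dir"/ssh_host_*_key.pub; do
        [ -f "$key" ] || continue
        cp -p "$key" "$new_dir/"          # -p keeps ownership and timestamps
    done
    # Private keys must be mode 0600 or a modern sshd refuses to start.
    chmod 600 "$new_dir"/ssh_host_*_key 2>/dev/null
}

# Typical invocation during the upgrade (run as root; paths hypothetical):
# copy_host_keys /etc/ssh /usr/local/etc/ssh
```

Preserving the keys means clients do not see "host identification has changed" warnings after the swap, which matters when hundreds of scripts and users connect to the box.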
Any CVE vulnerability published for the operating system that doesn't have a vendor patch, we will patch. We will also patch when there are issues with OpenSSH or significant feature updates (OpenSSH 7.x has a lot of these most people haven't had to contend with yet, since they are on older versions). Last, we'll patch when there are significant performance or stability issues in subsystems we have code-access to, for example Sendmail or Inetd.
PARSEC doesn't have the complete source code to most of the operating systems we support, and where we do have source code, we aren't allowed to use it to compile new binaries. So, for things like firmware or deep kernel patches, we cannot issue patches. Keep in mind these types of patches are usually covered by the vendor before the end of operating system support. However, in cases where there is no way to fix an issue other than issuing a kernel patch, we can't attack the issue directly.
Usually there is some other way to prevent exploitation. For example, for the AIX *lquerylv* exploit, we created a binary patch. This way, we never have to ship any IBM software; we simply disable the feature being exploited. That isn't as ideal as fixing the issue directly in the code, of course. But keeping an eye on regulatory compliance, which is more important: some obscure feature few folks ever used or were even aware of, or your company's positive regulatory compliance? The fix thus addresses the needs of folks wanting to version-lock while staying compliant with their own security policies.
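The "disable the exploitable feature" approach can be illustrated with a generic sketch. To be clear, this is not the actual PARSEC remedy (that is a byte-patch, per the text), and the lquerylv path in the example is hypothetical; this just shows the simplest form of the mitigation: make the vulnerable setuid binary unusable.

```shell
# Generic mitigation sketch: when no code fix is possible, neutralize the
# exploitable setuid binary.  Illustrative only -- the actual PARSEC remedy
# for lquerylv is a byte-patch, not a chmod.
neutralize_setuid() {
    target=$1
    [ -f "$target" ] || return 1
    chmod u-s,g-s "$target"   # drop the privilege-escalation bits first
    chmod 000 "$target"       # then disable the feature outright
}

# Example (hypothetical path, run as root):
# neutralize_setuid /usr/sbin/lquerylv
```

Record the original permissions before running anything like this so the change can be reversed if the feature turns out to be needed.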
When there are CVEs that we cannot address from any angle, because, for example, a fix would require a deep kernel patch, we will attempt a workaround instead. If there is no possible workaround, we will note this and notify every customer in our program that we cannot address it, along with the implications of the security issue. This allows administrators and site security administrators to make informed decisions and either implement a workaround where possible or deprecate the server if needed.
Most of the time, OS vendors do not write basic system software if they don't have to. This is one of the major draws of Unix, and it was an especially powerful attraction in the 1980s, when writing your own Unix-like OS from scratch was only for the very biggest corporations (IBM writing AIX from scratch was a rare event). Why rewrite things like Inetd, Sendmail, or Secure Shell? They already exist in the world, and most of the time all the OS vendor does is customize the software's settings to its liking and re-release it. Vendors want to be free to focus on the parts they feel are value-adds for their product, not on basic functionality that should already be baked in. This fact gives PARSEC the ability to get the original source distribution for many such items and use it as the basis of a patch.
A good example is AIX 5.3. The OS had twelve major updates or “technology levels” as IBM calls them. The official lifespan of the OS was two years.
Let's look at the actual track record. For that two-year period, we can examine the history of security problems for this OS using the CVE timeline tool at cvedetails.com.

(Figure: CVEs for AIX)
It takes some hand waving to explain the results. You have to remember that AIX 5.1 came out long before 5.3 and this is the reason why you see exploits and issues going back before 5.3's release date. They are rolling issues into 5.3 that really were inherited from AIX 5.1. We can see that the majority of issues happened during the operational years of the OS.
It's also clear that some vulnerabilities surfaced long after IBM decided to quit patching. For example, there is an lquerylv command execution vulnerability that emerged in 2015. That is after the end-of-support date for AIX 5.3, and guess what? IBM didn't patch it and probably never will. Examining the associated IBM APAR bug report shows that IBM only shipped a new binary for AIX versions 6 and 7. PARSEC has a byte-patch available for the issue.
Bottom line is that if you were a PARSEC Patch customer, you could keep using your AIX 5.3 system indefinitely and still be 100% above board for your regulatory compliance and patching.
In the event of a remote root exploit or a remotely exploitable issue in either OpenSSH or OpenSSL, we will release patches earlier. This is analogous to a hotfix process. Any patches released off-cycle will be rolled into that quarter's patch bundle.
The EPM tool is an open source package management system written by Michael Sweet. All package systems for Unix are basically the same. I'm not saying all of them are equal; I'm saying they all perform the same functions: adding, removing, tracking, and modifying packages. EPM is no different.
I first ran into EPM while working at Oracle. They used it for out-of-band package management of all the internal cruft Oracle likes to install on their hosted server systems. It worked quite well for them.
EPM packages allow us to use pre-install, post-install, pre-remove, and post-remove package scripts. EPM also allows us to bundle packages individually or set up dependency trees. So, it's a fully featured package management tool.
One might ask, why not just use the native tools that the OS vendor ships with the operating system itself? That's fair. Most of those tools are usable, and releasing vendor-native packages does have some appeal. However, we reasoned that using EPM was actually better because it doesn't alter the local package registry at all. It exists out-of-band and stays completely independent. The main situation to consider here is a vendor deciding to release a patch for something it previously said it wouldn't touch. In that situation, we can easily back out our patch and install the vendor version. Keep in mind that would be an extremely rare situation that we haven't yet seen.
Ultimately, it's just easier to keep the system's package registry pristine and let EPM packages keep their own registry separately. This is actually done quite simply in EPM: a directory called /etc/software contains a removal script for every installed package.
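The out-of-band registry is easy to work with from the shell. A minimal sketch, assuming only what the text states (one removal script per package under /etc/software); the script name in the example is hypothetical, and SOFTWARE_DIR is overridable purely for illustration:

```shell
# Sketch: inspecting and using the out-of-band EPM registry.
# /etc/software comes from the text; script names are illustrative.
list_epm_packages() {
    ls "${SOFTWARE_DIR:-/etc/software}" 2>/dev/null
}

remove_epm_package() {
    script=${SOFTWARE_DIR:-/etc/software}/$1
    [ -x "$script" ] || { echo "no such package: $1" >&2; return 1; }
    "$script"   # each removal script backs out its own package
}

# e.g. (hypothetical package name):
# remove_epm_package openssh.remove
```

Because nothing here touches the vendor's package database, a later vendor patch can be installed cleanly after backing ours out.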
If a customer specifically requests a patch be in native format, we can easily create that, also.
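Generating both formats from one definition is a normal EPM workflow. The sketch below uses standard EPM list-file directives (%product, %version, and so on), but the file contents, package name, and paths are hypothetical, and the epm invocations are shown as comments rather than run:

```shell
# Sketch: one EPM list file, two output formats.  The list-file contents,
# package name, and file paths are hypothetical examples.
cat > openssh.list <<'EOF'
%product OpenSSH for Tru64
%version 7.9
%vendor PARSEC
%description Replacement Secure Shell subsystem
f 0555 root sys /usr/local/sbin/sshd sshd
EOF

# Portable shell-archive installer (the ./openssh.install style):
#   epm -f portable openssh openssh.list
# Native Tru64 setld package, if the customer asks for one:
#   epm -f setld openssh openssh.list
```

The same list file drives every output format, so offering a native package on request costs us essentially nothing.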
Let's face it, many times patching is driven by the need to comply with some type of regulatory standard. These standards often regulate when patches are installed, what procedures must be taken before and after, and what type of patches are required. However, in other cases, the standards are vague and complex. Lawyers seem to be able to generate much more difficult code to parse than programmers.
However, in most cases one simply needs to have a plan for patching and technology updates. If your plan is to version-lock legacy environments, then that plan is what you must document and be prepared to defend.
In almost all cases, the regulatory standards try to be broad and use a lot of catch-all language that empowers the auditor. Most standards insist that you exercise "controls" (policies and procedures) to deal with IT security. They basically allow you to construct your own "cage" and live in it. The biggest mistake most companies make is that they (well, the security team) create very strict standards and document them, but then take no action to actually enable and enforce them. When the auditor shows up, they find that the company has not been practicing any of its supposed standards. The company would have been better off establishing a lower bar it could actually reach.
Of course, that is not to say that the various regulatory standards have no floor on what they will allow you to put into policy and practice. Most have very vague requirements that can cause a lot of discussion and controversy as to whether *your* policy meets the spirit and letter of the governance. So, to further clarify, allow me to illuminate some of the common standards and what they actually require in the following sections.
Patching frequency is almost always dependent on what your local security policy requires. If you become a PARSEC Patch customer, you should make sure your policy doesn't require patching your legacy systems at greater than quarterly frequency (with exceptions for hot fixes). Otherwise, you could easily get out of compliance with your own policy. This is one easy way auditors will ding you. So, make sure to work with your own compliance folks to document what you really do instead of what you think you should do. Otherwise, the auditors may find that the two don't overlap well enough.
U.S. Congress gave us the Gramm-Leach-Bliley Act (GLBA), also called the Financial Services Modernization Act of 1999. This is a law intended to govern (among other things) the release of sensitive financial information. The law limits when a financial institution can disclose a customer's personal information known as “NPI” (non-public personal information).
The real meat of the GLBA text is called The Safeguards Rule and this is where IT folks should concentrate.
You can read the text easily (the legalese isn't too terrible). However, let me summarize it.
GLBA IT Requirements
As you can see these are pretty broad requirements. However, we know that security threats of any consequence are given a CVE entry. If new threats emerge and we choose not to address them, then we are out of compliance with the GLBA, pure and simple. From this perspective, the GLBA requires you to apply patches and to maintain the ring-fence around the customer NPI data.
Encryption usually plays a big role in GLBA security controls. Thus, you are going to need to be free of things like the SSL Heartbleed problem disclosed in 2014. Anything that could be used to compromise your encryption's operational security must be fixed and updated.
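A quick triage for Heartbleed exposure can be done from the version string alone. This sketch encodes the affected range from the public CVE-2014-0160 advisory (OpenSSL 1.0.1 through 1.0.1f); note the assumption that no vendor has backported a fix into an old version number, which a string check cannot detect:

```shell
# Triage sketch: flag OpenSSL builds in the Heartbleed-affected range
# (1.0.1 through 1.0.1f, per the CVE-2014-0160 advisory).  A version string
# cannot prove the absence of a vendor-backported fix.
heartbleed_suspect() {
    ver=$1    # e.g. from: openssl version | awk '{print $2}'
    case $ver in
        1.0.1|1.0.1[a-f]) return 0 ;;   # in the affected range
        *)                return 1 ;;
    esac
}

# Usage:
# heartbleed_suspect "$(openssl version | awk '{print $2}')" && echo "investigate"
```

On legacy systems the answer is often moot because the shipped OpenSSL predates 1.0.1 entirely, but the check makes a tidy audit artifact either way.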
The requirements in HIPAA are designed to protect individuals from information disclosure at the institutional level by folks like hospitals and insurance companies ("payers and providers," to use the industry term).
Anyone who handles Protected Health Information (PHI) is probably already governed by HIPAA. However, what does that mean for IT professionals who manage servers in the healthcare industry? Well, the part of HIPAA we need to pay attention to in that case is called The Security Rule. It is very similar to the GLBA requirements.
What The Security Rule says is actually pretty complex. However, the spirit of the law is that you must do everything reasonable to prevent PHI disclosure, and you must maintain a clean IT infrastructure free from known problems. The rule breaks its requirements down into essentially three categories: physical safeguards, administrative safeguards, and technical safeguards. Physical safeguards are things like locks (and their proper use). Administrative safeguards map pretty nicely to the phrase "need to know."
Technical safeguards are the IT stuff you and I care about. The text of HIPAA doesn't say exactly what kind of solutions you have to implement, but it does say what those solutions have to be capable of from a security standpoint.
HIPAA IT Requirements in the Security Rule
As you can see, HIPAA is vague, but only in ways that advantage the auditors.
Sarbanes-Oxley is a law within US Title 15 governing financial disclosures. It's a complex law, but in a nutshell it tries to ensure that the information investors and the U.S. Securities and Exchange Commission receive is accurate and free from tampering and fraud, and it keeps the folks doing the disclosing on the hook for the accuracy of that information. The idea is to prevent crooked companies from lying about their profit and loss to scam their clients or investors.
SOX is extremely vague, and this creates headaches. The law to read is US Title 15, Chapter 98, Subchapter IV (ugh, I feel like a lawyer). It has a section called "Enhanced review of periodic disclosures by issuers," and this is the part you want to read concerning IT rules. Unfortunately, its requirements are much more vague. Again, though, I will provide a summary.
Sarbanes-Oxley IT 404 Requirements
It's a painful read and I'd recommend checking out the SOX For IT Pros guide by SANS. It will help you decode the requirements for SOX. Do they require patches? Well yes of course, otherwise an insecure system could not be considered a secure source of financial information. It's easily falsifiable if it's on an insecure computer. However, this is all mostly extrapolation because the text of the law itself doesn't require anything really specific from your IT security.
The Payment Card Industry (PCI) standards aren't law; they are well-accepted standards used by credit card companies to govern their merchants and customers. In other words, if your company processes credit cards, you're going to be subject to PCI standards or risk losing your merchant status.
PCI standards are actually very detailed but there are different “levels” of compliance needed for different folks. It all basically comes down to how many credit cards you process on a regular basis.
PCI can get pretty complex and specific. However, all levels of PCI have the same basic spirit: you can't do anything that might put folks' credit card info at risk. That includes not only their numbers but also their transaction history. Start with the FAQ, and you can dig into specific questions for the different levels of PCI.
Without going into the *many* specific requirements of the PCI standard, let's get a few things straight. Any system that's going to store or process credit card info has to be kept secure. Keeping it secure means patching, having a security policy, proving you execute that policy on a regular basis, and so on.
PCI standards also get very specific about the usage of encryption. Since many security problems impact SSL, SSH, or other encryption tools and protocols, the code paths in those tools must be kept secure.
Just for fun, let's watch a few PARSEC patches in action and have a quick discussion about the actual mechanics of applying them. There is only so much legalese and policy discussion one can take before wanting to see something more concrete.
Here is an example on a Tru64 system. We have an ancient and highly customized version of Secure Shell which is now a dead branch of SSH. We need to get the system upgraded to a more modern version of Secure Shell. PARSEC has a package which will do this automatically.
EPM can generate packages in the native system format (setld, in the case of Tru64) or as "portable" packages. Portable is the preference we'll use here, to keep from polluting the local package repo. In cases where the customer requests native packages, we can easily provide them, since EPM will generate multiple package formats from the same metadata definition file. The "portable" format just generates an installable shell archive: very handy.
Upgrading Secure Shell on Tru64
$ sudo ./openssh.install
Copyright 1999-2017 by Michael R Sweet, All Rights Reserved.

This installation script will install the OpenBSD Secure Shell software
version 7.9 on your system.

Do you wish to continue? yes

Copyright (c) 1982, 1986, 1990, 1991, 1993
The Regents of the University of California. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software must
   display the following acknowledgement: This product includes software
   developed by the University of California, Berkeley and its contributors.
4. Neither the name of the University nor the names of its contributors may be
   used to endorse or promote products derived from this software without
   specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

[...]

Do you agree with the terms of this license? yes

Backing up old versions of shared files to be installed...
Installing software...
INFO: Running Tru64 OpenSSH upgrade procedure, preserving host keys where possible.
WARNING: Host keys may have changed. Please update any clients.
Updating file permissions...
Installation is complete.
As you can see, this is a somewhat interactive process. You can simply add the word “now” to the command line and this will bypass the interactive agreement and just do the installation quietly.
Sendmail is another bit of software that comes from the open source world but that just about all vendors have packaged at some point or another. Problems with Sendmail are going to be patchable because the software comes from folks who share the code. So, we will not provide the vendor-specific version or any fixes from the vendor, but we will provide the open source version and fixes from the project. The latter is almost always going to be newer and better at addressing security concerns. The vendors typically don't update until something forces them to.
$ cd epm/tru64-5.1-alpha
$ sudo ./sendmail.install
Copyright 1999-2017 by Michael R Sweet, All Rights Reserved.

This installation script will install the Sendmail software version 8.15.2
on your system.

Do you wish to continue? yes

SENDMAIL LICENSE

The following license terms and conditions apply, unless a redistribution
agreement or other license is obtained from Proofpoint, Inc., 892 Ross Street,
Sunnyvale, CA, 94089, USA, or by electronic mail at email@example.com.

License Terms:

Use, Modification and Redistribution (including distribution of any modified
or derived work) in source and binary forms is permitted only if each of the
following conditions is met:

1. Redistributions qualify as "freeware" or "Open Source Software" under one
   of the following terms:
   (a) Redistributions are made at no charge beyond the reasonable cost of
       materials and delivery.
   (b) Redistributions are accompanied by a copy of the Source Code or by an
       irrevocable offer to provide a copy of the Source Code for up to three
       years at the cost of materials and delivery. Such redistributions must
       allow further use, modification, and redistribution of the Source Code
       under substantially the same terms as this license. For the purposes
       of redistribution "Source Code" means the complete compilable and
       linkable source code of sendmail and associated libraries and
       utilities in the sendmail distribution including all modifications.
2. Redistributions of Source Code must retain the copyright notices as they
   appear in each Source Code file, these license terms, and the
   disclaimer/limitation of liability set forth as paragraph 6 below.
3. Redistributions in binary form must reproduce the Copyright Notice, these
   license terms, and the disclaimer/limitation of liability set forth as
   paragraph 6 below, in the documentation and/or other materials provided
   with the distribution. For the purposes of binary distribution the
   "Copyright Notice" refers to the following language: "Copyright (c)
   1998-2014 Proofpoint, Inc. All rights reserved."
4. Neither the name of Proofpoint, Inc. nor the University of California nor
   names of their contributors may be used to endorse or promote products
   derived from this software without specific prior written permission. The
   name "sendmail" is a trademark of Proofpoint, Inc.
5. All redistributions must comply with the conditions imposed by the
   University of California on certain embedded code, which copyright Notice
   and conditions for redistribution are as follows:
   (a) Copyright (c) 1988, 1993 The Regents of the University of California.
       All rights reserved.
   (b) Redistribution and use in source and binary forms, with or without
       modification, are permitted provided that the following conditions
       are met:
       (i) Redistributions of source code must retain the above copyright
           notice, this list of conditions and the following disclaimer.
       (ii) Redistributions in binary form must reproduce the above copyright
           notice, this list of conditions and the following disclaimer in
           the documentation and/or other materials provided with the
           distribution.
       (iii) Neither the name of the University nor the names of its
           contributors may be used to endorse or promote products derived
           from this software without specific prior written permission.
6. Disclaimer/Limitation of Liability: THIS SOFTWARE IS PROVIDED BY SENDMAIL,
   INC. AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
   INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
   AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
   SENDMAIL, INC., THE REGENTS OF THE UNIVERSITY OF CALIFORNIA OR
   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Do you agree with the terms of this license? yes

Backing up old versions of shared files to be installed...
INFO: Stopping legacy sendmail instance before install
Installing software...
INFO: Starting new sendmail instance, succeeded.
Updating file permissions...
Installation is complete.
So, this is a MAJOR upgrade for Tru64 since it takes the mail server up two major revisions and dozens of minor ones. Since it's a big source of exploits and trouble, upgrading Sendmail on a live system is a must if it has to face the Internet.
Keep in mind that there is a heckuva lot more software that can be patched and updated on just about all Unix variants. Almost all the services one might be required to run locally, like Secure Shell, SMTP, Inetd, and other network services, are things the vendor got from the open source world. Those are also the biggest risks as far as attack surface goes. Fix those, and you can maintain your system for a very long time, if not forever.
There is also one huge advantage for version-lockers that we haven't discussed. Since most exploits and issues are found while an OS is in its mainstream lifespan, few issues pop up after the OS goes end-of-life. Sure, a showstopper issue can put you in a really bad spot, but not if you have PARSEC backing you up and creating patches for those issues. Once the OS becomes "old," fewer attackers spend time trying to hack it, and the software has already had a lot of testing by users before the end of service life. Thus, fewer issues come up over time, which means fewer patch cycles and less hassle.