Version locking is the practice of applying a defense-in-depth security strategy to a fixed version of business-critical software so that the same version can run safely for an indefinite period of time. The software continues to run in a protected environment, performing a narrowly defined task.
Let's examine an example to make the previous description more tangible. This is a real-world example from past experience. A client has an older system which runs critical parts of their ERP (enterprise resource planning) system. The operating system has been out of support for 10 years and the application is now six versions behind. Because of its age, the application vendor will only support the application under an expensive consulting arrangement.
The client has been unwilling to upgrade because their software vendor now uses a “subscription” based license. Their old version was bought and paid for straight up. The new version would cost them significantly. There is also the cost of upgrading the servers and licensing a new operating system. The client wouldn't care, normally, but their company has been purchased by a Fortune 500 corporation, and they now have Sarbanes-Oxley requirements to fulfill. This means that the parent company's security policies are now binding on the client.
The client is not prepared to make the financial outlay required to migrate to a newer ERP system or upgrade their old one. They also don't have the time or the resources to do it before the next audit. So, what options do they have? Version locking is ideal in this situation.
Knowing that the OS has several vulnerabilities that don't relate to the software, the client pursues a version locking strategy. They employ a firewall to restrict access to the vulnerable OS. They set up a “jump server” to access their application and don't allow any connections directly to the system other than from key-authenticated admin users. The application itself also has an older known vulnerability. So, the client disables that portion of the software so it can no longer be used. They weren't using it anyway.
When audit time rolls around, each item which would have violated their security policy has been mitigated. The auditors find that the system now meets or exceeds the corporate policy. Thus, the company saves seven figures and a lot of hassle, including a possibly failed audit. This is the kind of situation where version locking succeeds brilliantly.
What are the goals of version locking? Well, they break down into a list like this, ordered by importance.
Goals of Version Locking
The costs for upgrading software aren't always clear. Some folks will only consider the actual cost of the software. They may fail to consider the cost of new hardware, a fully licensed new operating system, new storage costs (which will almost certainly be larger), and the potentially enormous cost of migrating the data and metadata (like DB triggers and functions) to the new system. Version locking avoids all that cost and leaves you with a system that's already familiar. So, no retraining is needed.
Not every type of software can be version locked. In some cases the software itself will have security problems. In other cases, the software will require vulnerable OS features that can no longer be patched.
In some cases, regulatory changes or changes in business dynamics will force an upgrade. However, wouldn't it be nice if those were the only things that forced an upgrade, rather than a greedy software vendor that wants to put you on an expensive subscription treadmill?
In order to try to formulate a good version-locking strategy, one needs to adequately understand the function that the software performs. One of the key inputs to forming a successful version locking strategy is to know what network-facing services you can disable. This reduces your attack surface. In order to fully picture the threats, you need information about what they actually are.
This need for information should drive your initial host audit. Find out as many relevant details as you can. The OS and its version, the network configuration, the open TCP and UDP ports, additional hardware requirements like serial ports, the accessibility of the console, and other facts are all needed to make a good plan.
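If you're comfortable with a little scripting, some of these basic facts can be collected automatically. Here's a minimal Python sketch using only the standard library; the port list and the localhost-only probing are assumptions for illustration, and a real audit would still record hardware, console access, and patch level by hand.

```python
import platform
import socket

def audit_host_facts(ports_to_check=(22, 23, 80, 443)):
    """Collect a few basic audit facts about the local host.

    A minimal sketch only: it records the OS, hostname, and which of
    a handful of well-known TCP ports are listening locally.
    """
    facts = {
        "os": platform.system(),
        "os_release": platform.release(),
        "hostname": socket.gethostname(),
        "open_tcp_ports": [],
    }
    for port in ports_to_check:
        # connect_ex returns 0 when something is listening on the port.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex(("127.0.0.1", port)) == 0:
                facts["open_tcp_ports"].append(port)
    return facts
```

Run it on the host you're auditing and paste the output into your audit document as a starting point.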
Your audit goal should be the complete understanding of the system's security state, network configuration, and software requirements. You don't need full software requirements, just the ones that impact security or degrade your audit stance.
Creating a formal audit-document that records your results will also prove to be an advantage come audit-time. The outside auditors will be able to use your work as a reference. If they can see you did a thorough job, they are more likely to sign off on their own audit results because their confidence is higher. If you were an inspector looking for asbestos insulation, who would you be more likely to trust, the guy who shrugs and says “I don't know where it is or if we even have asbestos.” or would you rather deal with someone who hands you a list of all the places they know about and exactly when & how they plan to mitigate or remove the asbestos? Sure, you'd still do the inspection in both cases, but one seems a lot easier than the other, not to mention being proactive and cooperative will help.
Audit These Yourself
The most important potential show-stopper item will be the application itself. Since we aren't going to upgrade it, it must not contain any critical vulnerability that we cannot mitigate. For example, if the application has a vulnerability that involves printing, but you don't need to print anything anyway, then that portion of functionality may be disabled. Doing so increases your safety without impacting the functionality you actually need.
Knowing what kind of resources the application uses will also help you form a plan. Applications with modest needs and no special hardware might be good candidates for virtualization, especially if they only require a single TCP or UDP port for communication.
Knowing the OS and all its potential security problems is also a key consideration. However, the threats for any operating system are generally highly dependent on the revision and patch level. A good place to start is the CVE (Common Vulnerabilities & Exposures) database. Then check to see if the OS vendor has their own, separate list. Be sure to record these items in your audit. Start factoring in these findings so that you'll be prepared to pick the most effective ring-fencing strategy (described later).
Applications can also have specific vulnerabilities. These can be more difficult to track down since some vendors don't provide a complete list. Thankfully, most vendors do have some security information online these days. If you do find vulnerabilities, pay close attention to their context. For example, if exploiting the issue requires some optional functionality in the application, it might be possible to simply disable that feature rather than biting the bullet for the upgrade.
Knowing the audit standards is also critical to passing audit. We'll cover each audit type separately and give you some tips on how to ready yourself. However, most of the audit standards are surprisingly short and readable. You don't need to read the entire law or the legalese that doesn't pertain to your job.
In some companies internal policies are guidelines; in other places they are more like law. That is to say, companies place different levels of importance on rule-following. However, that doesn't change the level of rule-following that auditors will expect or accept. When it comes to most audit standards, the company's own policies are given special importance. So much importance, in fact, that not following a company policy is grounds for failing an audit. Remember, auditors are mostly interested in the rules you make for yourself. Auditing to see if you are following them is simply operational. In the mind of an auditor, the rule and the rule-maker are much more important than whether Joe Admin is following rule XYZ. After all, a rule violation can be an isolated incident that can be corrected through some compliance action. An under-functioning or critically non-existent rule breaks the whole auditing system.
You will want to understand the physical connections going in and out of your server. This is critical because you can't mitigate what you don't know about, and these are difficult to discover without a physical inspection. For example, how will you know if you are using any old serial devices like modems, scanners, or serial printers if you never look for them? The same is true for all kinds of special equipment from fiber channel adapters to radio transmitters. There is only one way to do this properly - go look. If you can't do it, ask someone at the site to do it for you and take video.
Logical communication is also an important item to understand. Who does this system connect to? What networks are they on? What ports are they using? What about file sharing and printing; what IP addresses are being used for that? One easy way to do this is to take a look at NetFlow data from your upstream switch. Not all companies will collect and/or use NetFlow data, though. So, refer to the next section for additional strategies.
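If NetFlow isn't available, even a snapshot of the host's own connection table gives you a start. The sketch below summarizes remote peers from netstat-style output; the line format shown is an assumption (it varies by OS), so treat it as a starting point rather than a finished parser.

```python
def summarize_peers(netstat_output):
    """Summarize remote peers from `netstat -an`-style output.

    Assumes lines shaped like:
    'tcp 0 0 10.0.0.5:1521 10.0.0.77:49152 ESTABLISHED'
    Adapt the field positions to your OS's actual format.
    """
    peers = {}
    for line in netstat_output.splitlines():
        fields = line.split()
        if (len(fields) >= 6 and fields[0].startswith("tcp")
                and fields[5] == "ESTABLISHED"):
            # Count connections per (remote host, local service port).
            remote_ip = fields[4].rpartition(":")[0]
            local_port = fields[3].rpartition(":")[2]
            key = (remote_ip, local_port)
            peers[key] = peers.get(key, 0) + 1
    return peers
```

Collect a few snapshots at different times of day; a single sample can easily miss a nightly batch job's connections.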
Distribution media is a minor issue, but it might impact your plans in a major way. For example, what if you plan to re-install the application on a newer operating system, but then you find out that you can't get your hands on the original media? Not everything can simply be compressed into an archive and moved around. Sometimes software will do tricky, complicated, or unexpected things as part of its install process. So, if you make plans to move your old systems or applications around, just be sure you can actually do it. Having the original media provides one piece of that puzzle.
Your backups are also a critical part of your planning strategy. How good is your backup system? You should consider whether the backups are fully restorable and what would be required to actually perform the restore. For example, at PARSEC we often see folks using 20 year old tapes with loads of media errors on them. In other cases, we'll see clients who have backups but only one day of retention that then gets overwritten. Ideally you want enough data retention that you have several fully restorable images as well as a backup accessible enough to do what's needed. For example, we often see clients who have NetBackup or Tivoli eschew system backups and bootable tapes. This is a mistake. It's one thing to use enterprise backup solutions for data, but it's foolish to think the backup will help you much with a bare metal recovery. You'll need to specifically consider that scenario before you can freely experiment with, or even theorize, a version locking solution.
In this context, what is meant by an auditing tool is anything that will help you understand the system configuration and may relate to an audit item. There are several tools that one might find useful for this task, but they aren't available on all platforms. Generally speaking, having an extra Linux or Windows host around temporarily for this can be helpful.
Probably the most important item to consider is your penetration testing suite. In general terms this is a network port scanner application that feeds potential vulnerabilities to some sort of checker. The checker portion of your pen-testing suite should tell you if potential problems are actual problems. These types of tools are fairly dangerous to use for bulk scanning because they could potentially crash the system. However, if you are using a penetration testing suite in a controlled context against one machine, it's not as risky. You can set aside time so that folks know the system may be down, you can research any known-issues with your platform, and finally you can make sure you have valid backups before getting started.
There are several penetration testing suites available. Some are commercial and others are open source. Some are simply port-scanners, such as nmap, others are comprehensive suites that cover as many vulnerabilities as possible with actual penetration tests yielding a pass/fail result such as Nessus. Still other penetration test frameworks consist of bootable OS media (DVD or USB) that contain a constellation of auditing tools, but no comprehensive scanning tool. Your needs may vary; so choose accordingly. The more your own level of technical sophistication, the more down in the weeds you'll want to get with individual tools.
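For the down-in-the-weeds crowd, even a quick banner grab against a single port can tell you what's really answering there. This Python sketch is far gentler than a bulk scan, but it's still a probe, so use it only in a window you've announced.

```python
import socket

def grab_banner(host, port, timeout=2.0):
    """Connect to one TCP port and read whatever banner the service
    volunteers (many daemons, like SSH and SMTP, announce themselves).

    Returns the banner text, or None if the port doesn't answer.
    Single-host, single-port probing only; this is not a scanner.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(256).decode("ascii", errors="replace").strip()
    except OSError:
        return None
```

Comparing the banner's version string against the CVE database is a cheap first pass before you bring out the heavier suites.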
If you have unlimited resources, you may also consider hiring a penetration testing firm like Qualys or BeyondTrust to help you build your version locking plan by pointing out all the vulnerabilities that you need to mitigate. Their results will be more trusted by an auditor because they are a third party and have known-good procedures.
In practical terms, you'll want to set aside downtime or at least system notifications that you are performing a penetration scan. This does two things. First, it keeps you out of trouble with your management and operational folks running the application. Second, it shows progress and intent. As we all know, a lot of what happens or doesn't happen in a large company is governed by perception. If your users perceive you to have a comprehensive plan, then see you executing parts of it, you will find doors opening for you later on without as much hassle or explaining. Your plan makes sense, it's going to save everyone hassle and save the company money. Why should they bother you? Most change for the sake of change is resisted internally. You are trying to preserve something that works, not trying to introduce something that's new and potentially drags down folks doing the work with unexpected change. You may find that setting very visible and auditable dates and times lubricates the process more than you expected.
Let's take a moment to discuss “methods and sources” for auditing tools. First, let's examine some different options in each class of penetration scanner.
Choosing a vulnerability scanner is up to you. You'll need to evaluate your needs based on what you are scanning. Overall, I would recommend Nessus for folks who are having trouble choosing. It's a general purpose tool that covers most use cases. Nessus also produces some nice reports that are suitable as timestamped “artifacts” (physical evidence that can be easily duplicated) for auditors. It can give you a good before and after view as well.
Most vulnerability scanners can scan entire subnet ranges of hosts at a time. While this might be useful to some, it's often detrimental if you are trying to audit one host. The reason is that these scanners often break normal services. For example, I once worked at a company where the tape backup robotic silos failed every Friday at 11AM. It didn't take long to discover that the security team was doing bulk scanning every week at that time to refresh their threat database. Blanket scanning with a vulnerability scanner is slightly dangerous, because you never know how the hosts will respond to an “almost hack”. Some of the plugins in some scanners have a mode where they can actually try live exploit code. This might be good for people who want to be able to make security reports with high confidence. However, some of those plugins will crash the host when they are tuned to be that aggressive.
Once you have chosen a scanner or scanners, it's time to begin your work. You'll have to use your best judgment about how to tune the scanner. Your appetite for risk will determine where to start.
What I'd recommend is to start with an unimportant machine on a private network. That way if you accidentally go too far with your scanner settings you can dial it down to where it needs to be before you try it on your production system.
Most scanners have some feature or function to save a report. It's usually quite helpful to keep your before and after results. It shows due diligence to the auditor.
You can bet on the scanner producing at least some false positives. It's very rare that all the issues the scanner finds will be exploitable or dangerous. In most situations it's about a 60:40 ratio of real findings to noise. So, you need to examine the results comprehensively. Common causes of false positives are services which run on non-standard ports. Those ports may belong to other unrelated services that have actual vulnerabilities. You'll have to check that out. There is no quick or easy way to filter for false positives. In other cases, you'll get results that are invalid because of the context. For example, a scanner may flag a service which is only vulnerable when run by a privileged user, even though you aren't running it that way.
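One way to triage findings on non-standard ports is to compare the service the scanner assumed against the banner you actually observed on that port. The report shapes below are hypothetical; adapt them to whatever your scanner actually exports.

```python
def triage_findings(findings, observed_banners):
    """Split scanner findings into 'investigate' and 'likely false
    positive' based on what is actually listening on each port.

    `findings` maps port -> service name the scanner assumed;
    `observed_banners` maps port -> banner text you collected.
    Both shapes are illustrative, not any real scanner's format.
    """
    investigate, likely_false = [], []
    for port, assumed in findings.items():
        banner = (observed_banners.get(port) or "").lower()
        if assumed.lower() in banner:
            investigate.append(port)   # scanner's guess matches reality
        else:
            likely_false.append(port)  # something else runs on that port
    return investigate, likely_false
```

A "likely false positive" still goes in your audit document with a one-line note explaining why you dismissed it; that's exactly the evidence an auditor wants to see.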
It also doesn't hurt to use more than one scanner. What I usually do is to use Nessus then follow it up with an nmap scan using fairly slow timing. That way I get Nessus's full analysis plus the concise summary offered by nmap.
First off, ask yourself “what are my controls?” A real auditor does care about problems and vulnerabilities, but they generally aren't IT security people; they are financial auditors. They only have a superficial interest in the nuance of computer hackery.
Controls which are supposed to be in place are their primary focus. They will evaluate each control on a couple of different axes. The first axis is effectiveness, the second is compliance. In other words, they want to make sure your controls work theoretically and practically.
Having emphasized the auditor's focus on controls, your focus should be on understanding what your controls are. I'm sorry this is going to be bad news for a lot of us, but you have to actually go read the corporate policy. Trust me; your auditors will already have done so.
First and foremost, you will be held to the standards you and your peers in IT have directly written. What it's absolutely not is a contest to see who is the smartest hacker. If that's your impression of the audit process, you'll be uncomfortably disabused of that notion your first time. It isn't so much about “real” security, as it is about documentation and evidence.
So, for each vulnerability that you've determined is potentially valid, put on your auditor hat and ask several questions.
Audit Questions about Vulnerabilities
That is the kind of thought process that you'll want to use yourself if you want to take the same approach as an auditor. The high-level process order is to define, observe, analyze, and report. Then you refactor what you need to pass and start the process over again.
There are a large number of auditing standards. I can't afford the time to cover all the ones you might encounter. However, keep in mind that auditing mindsets rarely change. These are the most common handful of auditing targets you'll encounter, and an understanding of their procedural mindset is helpful in a wide range of similar situations.
The Sarbanes-Oxley Act of 2002 or “SOX” is a law which was passed to create stricter accounting standards inside of publicly traded corporations. It was passed in the wake of the Enron and WorldCom accounting scandals that a lot of Americans lost savings in. It's controversial, and the debate is partisan. So, let's please not engage with that. We are here to discuss version locking strategies for use with SOX.
Most of the text of SOX itself deals with financial disclosures. It's legalese and if you do read it, skip to section 404. This is where the law discusses IT controls. Remember that the point of SOX is to create accountability in financial disclosures.
Now, let's discuss section 404. It's the most costly and, some say, the most complicated part of the law. It requires that the company create what is called an “internal control report”. This is where IT controls will be documented and detailed.
The main concern of SOX in section 404 is actually fraud. That's the whole point of SOX overall. So, your controls will likely be centered around preventing fraud. After all, OS and application security problems represent a pathway to fraud. Thus, controls are tuned that way.
SOX mandates that an external auditing firm conduct an independent audit. Each of these firms, such as PricewaterhouseCoopers or Ernst & Young, will have slightly different definitions around what IT standards they expect you to conform to. However, simply leaving open security holes because “we have to!” isn't a good idea.
Just because some software is part of a critical business process doesn't excuse it from being a potential entry point for fraud. However, you do have some room to negotiate. Most of the time, the auditors are willing to work with you if your plan to mitigate the risks is clear and documented. For example, they might initially insist that you patch your OS for a vulnerability, but relent when you prove that you have the issue mitigated by disabling the vulnerable service or blocking it with a firewall. Both patching and firewalling mitigate the risk of fraud, and thus meet both the letter and the spirit of the SOX law.
Just as with any other auditing standard, use the same methodology. The key is to know your current policies and how they can be tweaked. Another tip is to simply be ready for the auditors when they come. These people are expensive and often you are working with a team of auditors (for a fortune 500 sized company). If you have to do a lot of “wait! wait! I can fix that!” hand-waving, they aren't going to see you as nearly as credible. If you've already documented your vulnerabilities, updated your policies, and built your ring fenced environment, then you'll save the company money and yourself the time of too much explaining. The auditors would much rather examine your results and review your methods than haggle with you in a conference room ad-hoc.
HIPAA stands for the Health Insurance Portability and Accountability Act. Its aim is different from that of SOX, in that its main aims are to protect the privacy and security of individual health care data. This data is called protected health information or “PHI”. In an IT context it's called “EPHI”, and the E is for electronic.
The key point to remember about HIPAA is that you need to keep folks without a legitimate need-to-know from accessing EPHI. Obviously, an OS or application that's got unfixed vulnerabilities isn't protecting the information adequately.
HIPAA requirements are spelled out in greater detail by the law and the subsequent litigation. There are two critical “rule” sections, the privacy rule and the security rule.
The privacy rule is just something to be aware of. It tells you when you can disclose versus when you must hide any PHI. It's something you should be aware of if you work within a HIPAA environment anyway. If you aren't familiar with this rule, become so. Otherwise the rest of the auditing goals won't make a lot of sense.
The real meat is in the security rule. There are very specific items you need to be aware of to pass your audit. We'll switch to list form because the requirements are numerous.
Requirements for HIPAA Security Rule
As you can see, HIPAA is much more specific than SOX. Its goals are also different. SOX wants to prevent fraud, and HIPAA wants to prevent unauthorized folks getting a hold of health care records and/or altering them.
Again, with HIPAA, you need to start by reviewing your current policies, then see what you can handle simply by changing the policy. Sometimes policies are too specific or too prescriptive. Changing policy can be much easier than upgrading or changing a system. It's true that writing policy happens independently of systems engineering. However, both should be done with an informed view of the other. They can make each other more efficient and easier, but if you let one get fossilized while the other moves ahead, you will have serious problems.
Once you are done fiddling with your policies, you can start deciding what technical changes you need to ring-fence your version-locked environment. With HIPAA, you won't have to use much imagination to know if you are compliant. Just ask yourself one question and you'll have the gist of it: “Does anything about my environment allow people without need-to-know to access my server(s)?”
Incidentally, there are two common misconceptions with HIPAA that some who read the law infer. One is that you need so-called “encryption at rest” and the other is that you need transport encryption internally. Both are false. HIPAA has physical security requirements which basically state that your network and your data center need physical security, access control, and other items. Since that rule is non-negotiable, that obviates the need for encrypting data which is “at rest”. Nobody should be accessing that data if you've followed the other rules. There is no need for transport encryption on your internal switches, again because of the physical security you've already got in place. As an IT guy, you'll be happy to know that the physical security side of things is probably not your problem. That's generally handled by physical security or facilities groups within large companies.
The PCI standard is basically security for merchants who process credit cards. It's also often called PCI-DSS, which stands for Payment Card Industry Data Security Standard. PCI is not a government or regulatory standard. It's something that credit card companies require as a block. They all get together in something called the PCI-SSC, which is a private standards body that makes the rules on behalf of the credit card companies. It started in December 2004. If you operate in Minnesota, Nevada, or Washington, state laws may compel some compliance.
The main goal of PCI is to prevent unauthorized disclosure of credit card numbers and expiration dates. It's very detailed and gets pretty specific about IT practices which are expected to be adhered to. Compliance is pretty easy to compel, since busting your PCI audit means you'll lose your merchant status with the credit card company.
The PCI standard uses one term a lot. That term is “CDE”. Unfortunately, that doesn't stand for common desktop environment. That might be fun. No, CDE in this context stands for Cardholder Data Environments. That translates into “stuff you are on the hook to protect”.
Your level of effort to comply with PCI will be determined by the so-called compliance level you are obligated to meet. Two items will determine your level (levels span 1 to 4). The first is your volume of credit card transactions. The second item is where you process the cards. Online card processing is more risky than physical in-person card processing.
PCI is still policy driven, but much less so than the other standards we've discussed. PCI is more of a set of standards and directives than it is a policy framework. In other words it says what you must do and how you must do it in many cases. It's probably the most prescriptive standard we are looking at. Like SOX, it requires an external auditor. As of 2008, it also mandates actual penetration testing and scanning (although in some cases you can do it yourself). The scanner will look beyond simple vulnerabilities and also must check for potential DDoS (flooding) attacks. Thus, your own efforts to scan and correct issues you find ahead of time are going to be critical. It'll be too late to hand-wave once the audit team scans your system and finds problems.
The PCI standard takes a pretty dim view of using 802.11 wireless technologies without taking serious steps to secure the environment. It's also something that PCI auditors will absolutely scrutinize. Don't plan on getting away with open access points or WEP encryption. Those will get a failing result quickly.
When dealing with PCI audits, first find out what your PCI level will need to be. The folks who push PCI have lots of charts and tables to help you decide if you don't already know. After that, you'll want to find out if you are able to do your own penetration scanning, or if your company uses a third party. If an outside firm is involved, find out if there are any previous reports from previous iterations of auditing. Solve any outstanding issues first, then go after any new issue that might be a problem. Always do your own scan before the auditors show up, in any case.
The PCI-SSC governance group has a lot of resources showing what types of scanners they support and which groups of pen-testers or auditors they will honor.
The GLBA acronym stands for the Gramm-Leach-Bliley Act. It is active federal law, like SOX. Comparing it to the other standards we've looked at puts it somewhere between SOX and HIPAA. Like SOX, the GLBA targets financial institutions. However, instead of being set up to prevent fraud in financial disclosures to the SEC, it's geared toward preventing unauthorized access to or loss of financial information, much like HIPAA wants to protect EPHI. That and other finance-related things that aren't germane to our discussion, at least.
In the GLBA there is something called the “Safeguards Rule”. This is where IT professionals will want to focus. It says that a financial institution handling sensitive customer financial information (accounts, names, social security numbers, transaction details, etc.) will need to do the following four things.
GLBA Safeguard Requirements
So far, the GLBA is probably the weakest standard with the fewest real requirements for IT folks. However, like all of them, it requires you to have a policy and a plan. So, pretty much all the same factors and procedures apply.
Now we come to the technical part of this discussion. What do you do to create a secure environment for something that's got unpatchable problems? It's possible! The key fact that we're attempting to capitalize on is that most audit standards don't tell you exactly how you must solve security problems. They merely require that you have a policy that offers a solution and that it protects the company in the real world. Oftentimes, this can easily be accomplished through a version locking strategy, rather than an expensive migration and upgrade or outright replacement.
One of the primary methods used in all security scenarios is that of so called “defense in depth”. Using a military analogy is generally the easiest.
If you want to secure an aircraft carrier, you first have to consider the threats. You could be sunk by a torpedo, shot with large guns, hit by aircraft carrying bombs, rockets, or guns, sabotaged, nuked, rammed by a large vessel, or hit with a cruise missile. There are probably a few other things I'm not considering. The bottom line is there is no one single way to defend the carrier. So, how do they do it?
Well, you'd have destroyers out all around the aircraft carrier looking for submarines all the time. You might even have a hunter submarine following it around. You'd have anti-air guns and surface to air missiles for defending against planes. Cruise missiles can be shot down with high cyclic-rate robotic guns like the Phalanx CIWS. However, no single weapon can keep the carrier safe. It has to be all the different defensive strategies operating together.
That's the exact same idea needed to secure a computer system. You must not choose a single security measure. Instead, try to deploy as many as you can without causing too much hassle for the users. For example, if you are securing an old Oracle server that talks to clients over SQLnet, then there is no need to run services like chargen or time. Disabling those unneeded services is basic IT security, but it becomes much more critical when you are version locking the system & application.
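On systems that still launch services from an inetd-style configuration, disabling the unneeded ones can be as simple as commenting them out. Here's a sketch that does that mechanically; it assumes the classic one-service-per-line inetd.conf layout, so review the output by hand before installing it.

```python
def disable_unneeded(inetd_conf, keep):
    """Comment out every inetd-style service not in `keep`.

    Assumes the classic inetd.conf layout where the first field on
    each non-comment line is the service name. Sketch only; verify
    the result before replacing the live file.
    """
    out = []
    for line in inetd_conf.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            service = stripped.split()[0]
            if service not in keep:
                line = "#" + line  # disabled for version locking
        out.append(line)
    return "\n".join(out)
```

Keep the original file in your audit records; the before-and-after diff is a nice, concrete artifact of the attack surface you removed.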
Auditors are often trained in basic IT security. They know that a multi-axis approach to security is better and more serious than a single effort. Not only that, but if you face a penetration test, that extra security will be an impediment to the vulnerability scans, giving you a better chance to pass.
The primary ring-fencing strategy is definitely the use of a firewall. Most big companies will have a network or security team that manages them, so you'll want to check with them first. They aren't going to be friendly if you stand up your own firewall, and it might even be against company policy. Your first step is to understand the corporate network topology and get familiar with their vendor biases. I've never worked anywhere whose network administrators would want their gear described as heterogeneous; that is to say, most shops stick with the brand of network gear they favor. So, a quick way to start a corporate civil war is to bring a Checkpoint firewall into a shop that uses Cisco ASA firewalls, for example. I won't talk about firewall brands other than to say that there are plenty of very nice free firewall packages available if you need a no-cost solution.
Before you resort to a firewall, though, there are a couple of considerations. First of all, it's always more secure to simply disable a service than to firewall it. After all, the firewall only masks the issue well enough to pass an audit; it doesn't really erase it.
Another consideration is whether the platform OS in question has its own firewall subsystem. It's easier to use a built-in firewall than to deploy a discrete external firewall. However, if you are ring fencing several servers at once which are all on the same subnet, then a discrete firewall is easier to manage than a bunch of individual rulesets on several machines.
The first bit of information you need to plan your firewall updates is a list of open ports on your server(s). You also need to know what each numeric port represents in terms of the service it offers. For example, we know that telnet on TCP port 23 is a very dangerous service that passes usernames and passwords in cleartext. TCP port 22, on the other hand, is SSH, which is encrypted. So, we may choose to disable telnet entirely while creating an access control list for the SSH service.
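Here's one quick way to build that list. The sample data below stands in for running `ss -tln` (or `netstat -an` on older systems) against the live host:

```shell
# Extract listening TCP port numbers from ss/netstat-style output.
# The here-doc is sample data; on a live Linux host, pipe `ss -tln` in instead.
cat > /tmp/ss.sample <<'EOF'
State  Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0      128    0.0.0.0:22        0.0.0.0:*
LISTEN 0      5      0.0.0.0:23        0.0.0.0:*
LISTEN 0      64     127.0.0.1:1521    0.0.0.0:*
EOF
# Field 4 is the local address; the port is whatever follows the last colon.
awk 'NR > 1 { n = split($4, a, ":"); print a[n] }' /tmp/ss.sample | sort -n
```

Cross-reference each number against /etc/services or vendor documentation before deciding what to disable, what to ACL, and what to leave open.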
Firewalls are going to offer you two critical ways to add security. First, they can be used to limit traffic to originate only from trusted IP addresses or networks. In some cases, we can use the firewall to create a filter with no members, thus blocking services off completely. This is generally referred to as an access control list (ACL) or “filter” at the network level.
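As a sketch of what such an ACL might look like, here it is in iptables syntax. The trusted subnet 10.1.2.0/24 and the Oracle listener port 1521 are illustrative values, not drawn from the example above; translate to your firewall vendor's syntax as needed.

```shell
# Allow the vulnerable listener only from a trusted subnet, drop all else.
# These rules require root and are illustrative only.
iptables -A INPUT -p tcp --dport 1521 -s 10.1.2.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 1521 -j DROP
# A "filter with no members" that blocks a service completely is just the
# DROP rule with no ACCEPT entries in front of it:
iptables -A INPUT -p tcp --dport 23 -j DROP
```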
The other firewall feature of interest to us in this case is traffic normalization. This means that the device can disassemble traffic at OSI layers 2 through 4. If there are anomalies in the frames, packets, or segments, the firewall will correct them. These types of anomalies can lead to exploits. Some older systems have kernel or network bugs that can't be patched and can be exploited to gain access to the system or to choke or crash it. Good examples of this are WinNuke for Windows hosts and LaTierra for Unix and VMS hosts. Normalizing traffic prevents these kinds of exposures.
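For example, on BSD systems running pf, normalization is a single rule. The exact scrub syntax varies by pf version, and the rule is shown written to a sample file here; on a real system it lives in /etc/pf.conf.

```shell
# pf's "scrub" rule reassembles fragments and sanity-checks headers before
# any other rule sees the traffic. Syntax varies by pf version.
cat > /tmp/pf.conf.sample <<'EOF'
scrub in all fragment reassemble
EOF
# pfctl -f /etc/pf.conf   # on the real system, reload the ruleset (root only)
grep '^scrub' /tmp/pf.conf.sample
```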
If you find vulnerabilities that you can simply mask off or severely limit with your firewall, that's certainly going to be easy to document and understand. You'll also prevent vulnerability scanners from finding the service at all. Fewer questions lead to faster and more successful audits.
IDS systems come in several flavors. Typically, they are divided between host-based and network-based tools. Host-based tools run directly on the potential victim hosts, while network-based IDS systems, or NIDS, run on a discrete network-attached device.
First, let's discuss a host-based approach. Tripwire and OSSEC are both good examples. They generally create checksums of all your binaries (at least), or sometimes of all files of a certain type (e.g., “everything owned by Oracle”). If those checksums change, the change is reported and you can take action. The copious reports that so-called HIDS systems create are actually an advantage: you can use them to craft policies.
For example, if a certain class of attack would require the attacker to alter or remove some system configuration file (many do), then the HIDS system will report that change. On a Unix host, we know that a change to /etc/passwd or /etc/shadow is significant. On OpenVMS, we know that if someone messes with the sysuaf.dat file, we are going to be interested in why. Being able to catch these types of operations makes it much easier to craft a reasonable security policy. HIDS might be the answer to the auditor's questions: “How do you know when users are added to the system?” and “How can you tell if someone's falsified that?”
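The core idea behind those tools can be sketched in a few lines of shell. This is a toy demonstration, not a replacement for Tripwire or OSSEC, which add signed databases, scheduling, and reporting on top of it.

```shell
# Toy demonstration of the HIDS checksum idea: baseline, change, detect.
mkdir -p /tmp/hids_demo
echo "root:x:0:0:root:/root:/bin/sh" > /tmp/hids_demo/passwd
# Take a baseline of the monitored file:
sha256sum /tmp/hids_demo/passwd > /tmp/hids_demo/baseline.sha256
# ...later, someone tampers with the file:
echo "intruder:x:0:0::/:/bin/sh" >> /tmp/hids_demo/passwd
# Verification now flags the change:
sha256sum -c /tmp/hids_demo/baseline.sha256 || echo "ALERT: monitored file changed"
```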
NIDS systems are standalone, network-based solutions, and they usually play a different role than a host-based system. They work by applying binary signatures to network traffic, looking for suspicious patterns. They are generally plugged into some kind of network monitoring equipment so they can see all the traffic on a given network broadcast domain.
NIDS systems are going to generate reports showing all types of suspicious network activity. They will usually detect everything from someone running a remote exploit against a server to suspicious connection patterns that might indicate a flooding attack is in progress. The reporting aspect of NIDS systems is the most neglected. While almost all of them generate reports, some have better systems than others for clearing, assigning, and following up on those issues. If you are a decision maker on what kind of NIDS system to deploy, don't get caught up in the “who has the most signatures” game. It's just as important to make sure the reporting is usable for your purposes. Good reporting makes it easier to make policy, and auditors need reporting artifacts to prove you are doing your due diligence. Your word alone won't go nearly as far, and in most cases it won't work at all.
Lastly, many NIDS systems have a dangerous but interesting feature that allows them to actually intercept and destroy network connections during an attack. This stems from the fact that they can see both sides of a network conversation; they can use that inside knowledge of IP sequence numbers to forge TCP RST segments. They simply forge an RST (reset/disconnect) to both sides and the connection terminates. Thus, in the middle of an attack that requires a two-way conversation, this “active shootdown” feature can kill the connection before the attack succeeds. So, in a worst-case scenario where you need to run a service that has known vulnerabilities, you could choose to firewall the service and choke access down to only a handful of clients, then put a NIDS device on the segment to catch and destroy any hack attempts. That gives you two layers of defense.
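To make that concrete, a NIDS signature is typically a small rule like this Snort-style example. The address and SID are made up, and active-response options are vendor-specific, so check your product's documentation before enabling any shootdown feature.

```shell
# A minimal Snort-style rule: alert on any TCP connection to port 23 on the
# locked host. Stored in a sample file for illustration only.
cat > /tmp/nids.rules.sample <<'EOF'
alert tcp any any -> 192.168.100.2 23 (msg:"Telnet attempt against locked host"; sid:1000001; rev:1;)
EOF
grep -c '^alert' /tmp/nids.rules.sample
```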
Proxy servers have been around a long time, and almost any type of network service can be set up behind one. All a proxy really means is that another server acts as the place for clients to connect initially; it then brokers requests between the client and the server, ostensibly doing some security checks in the process. A good example of a web proxy server is Squid. Your browser talks only to the Squid proxy, which does all the external interaction with remote web servers. Firewalls often have some proxy services built in, such as SOCKS proxies that can proxy almost all TCP traffic.
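For example, a minimal Squid ACL fragment might look like the following. The subnet is an illustrative value, and a real deployment needs the rest of squid.conf as well.

```shell
# Minimal squid.conf fragment: only the trusted subnet may use the proxy,
# everyone else is denied. Written to a sample file for illustration.
cat > /tmp/squid.conf.sample <<'EOF'
acl trusted_clients src 10.1.2.0/24
http_access allow trusted_clients
http_access deny all
EOF
grep -c '^http_access' /tmp/squid.conf.sample
```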
Proxy servers can be used to mitigate direct interaction with old or insecure services. You can use a firewall to ensure that only the proxy server is able to interact directly with the vulnerable service. If exploiting that service requires a direct connection, the proxy server will likely thwart that line of attack.
Proxy servers can also be used to normalize network traffic. If an attack requires that the network data be altered or adulterated in a non-standard or protocol-illegal way, then a proxy will prevent it. In some cases, older systems will also simply work better with a proxy. This can stem from the fact that clients don't always do graceful disconnect operations and can leave server network resources, such as receive buffers, stranded or exhausted. Proxy servers are well behaved and always disconnect from the server side gracefully.
A bastion host is basically a specialized proxy server. Some people call these “jump” servers. The idea is that you log on to the bastion first, then jump off to the actual target server.
Bastion hosts are a pretty draconian way to secure a system, and a firewall is usually a better option. A bastion host has specific drawbacks, mostly stemming from the fact that users never get a direct connection to the target.
There are some situations that cannot be solved any other way, though. If you work in an environment where all traffic has to be directly connected (i.e., not switched or routed) unless it's encrypted, you might be facing exactly this situation.
For example, say you have a service like Telnet or SQLnet that doesn't give you any option for encryption. If the host OS is too old or too specialized, it might not be able to run something like Secure Shell. Instead, you could opt to use a bastion host running OpenSSH as a front end, and then use a crossover cable to connect it to the vulnerable system. If you have physical security in place, nobody can access the system without encryption; they'd be forced to use the bastion host.
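A sketch of that setup using SSH port forwarding follows. The hostname "bastion" and the address 192.168.100.2 (the legacy host's end of the crossover cable) are illustrative names, not from the example above.

```shell
# Tunnel the legacy Telnet service through the bastion with SSH forwarding,
# so nothing crosses the corporate network in cleartext. Illustrative only.
ssh -N -L 2323:192.168.100.2:23 admin@bastion &
# Users connect locally; the session is encrypted as far as the bastion:
telnet localhost 2323
```

With the crossover cable in place, the cleartext leg of the conversation exists only on that private link between the bastion and the legacy host.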
This can solve some otherwise unsolvable problems. However, I'd caution you to use it only as a last resort where other solutions won't suffice. Users generally hate bastion hosts and the lack of direct connectivity makes moving files to the target host a real pain.
Virtualization can sometimes provide you with extra security. This is mostly because it's easy to build a virtual network around a vulnerable system, and then ring fence it with a firewall, proxy, or bastion.
Virtualization itself isn't really a security feature. However, it does provide features that have security implications. Aside from using it as an easy way to build an inside and outside network around the guest server, it can also be used to monitor performance, control physical access, or easily take copy-on-write-based backups.
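As a sketch of those two features using libvirt/QEMU commands: the guest name "erp-legacy", the network name, and the XML file are all illustrative, and other hypervisors offer equivalents.

```shell
# Define and start a guest-only network (XML with no <forward> element means
# no route to the outside), then snapshot the locked guest's disk.
virsh net-define isolated-net.xml
virsh net-start isolated
virsh snapshot-create-as erp-legacy pre-audit --disk-only --atomic
```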
Writing policies is usually quite boring but fairly easy. If your company has a tech-writer or specialist who does most of the document cleanup, check with them first. They may have a template or standard format they use. You'll also need to find out where the official location for policy documents is within the company.
For a policy to be considered valid by auditors, it needs to be accessible to anyone in the company; some auditing standards, like SOX, require that to be the case. Next, find out if there is any formal process for putting the policy into force. If there is an existing policy, hopefully you have access to the original author, or that author is you. Either way, keep in mind you might have to make some changes. That's normal, and if you are trying to show some technical leadership, take the initiative and either re-write or radically update what needs updating.
Yes, you might be questioned a little more by the auditors for extensive policy updates and edits. However, keep in mind that the auditor's goal is not to sign off on something that will get both of you in trouble. So, if they see a lot of effort on your part, you can be assured that they will feel more comfortable, not less. It signals you are taking the process seriously and trying to craft a realistic policy you can abide by operationally.
Make sure your management is kept apprised of your efforts. Updating policies in a vacuum is a terrible idea that will almost certainly lead to frustration or failure. Also, editing policies that you don't have to follow yourself is poor form. If you control something you aren't responsible for, it's going to end badly for the person who actually is responsible; and if someone makes you responsible for something you don't control, that's not a recipe for success either. Too many companies fail to recognize this and have some “policy wonk” write up all the policies. This is an awful strategy.
Some situations just don't lend themselves well to version locking. In these situations you will do yourself more harm than good and waste energy better spent on migration or upgrade. Before that happens, let's take a look at some notable examples and hopefully save someone, somewhere the trouble of trying.
In some cases, you will have a legacy application or host on a hostile network, with no way to mitigate that exposure. For example, say you have some webservers that face the Internet. Perhaps they are the corporate web servers for company information. You can't simply firewall the world off from that port. If the webserver you are using has vulnerabilities, you are going to have to upgrade or disable the service. There is no way to ring fence a service that has to be open to the world.
In other cases, you may have servers which talk to your target machine on multiple ports in difficult-to-predict ways (i.e., random or numerous ports). This can be very difficult to firewall, because the rules you need aren't clear, or because only some of the hosts on the subnet can be defended with a firewall. This is a dicey situation that might not lend itself well to version locking. If the server requires too much open-ended access, it's tough to defend.
Sometimes legacy servers are connected to more than just a network. For example, industrial control systems often have serial connections to the machinery or automated tools they interface with or operate directly. These types of situations can also be hard to version lock because of how entangled they are, and it becomes even more difficult when serial-over-IP hardware or other bridging scenarios are involved.
If the hardware is so old or so interconnected that refactoring the configuration is difficult or impossible, you may be forced into a situation that's difficult to version lock. On the upside, this usually isn't the case, because the hardware that matters the most is the network interface, and network interfaces are fairly generic and mostly easy to upgrade or refactor discretely.
There are also cases where the application may need to broadcast information to other hosts on the network. If this is the case, you are faced with a problem. You can firewall or physically isolate the entire broadcast domain; that works, but it's often not possible, because the hosts caught up in the new configuration may not be able to tolerate a firewall sitting between them and other corporate network resources (like file servers).
Thus, if you have a legacy system that does a lot of network broadcasting, it's probably not a good candidate for version locking unless you can isolate the entire network segment along with it. Even then, you can end up in a situation where the other systems behind the ring fence pose too great a risk to the version-locked system, for example if they have a lot of unmanaged users. This situation gets complicated quickly and often isn't ideal.