Part 1 of this series (click here) dealt with servers and the computers that host them, and how and why we keep them running at maximum performance. In this article, we continue with “virtualization” and the “physical hosts” that contain “virtual machines”, and then move on to Data Structure and Storage. Let’s start with some basic definitions (there will be other definitions throughout this piece, formatted like the two below).
Virtualization & Virtual Machines: Virtualization, in I.T. terminology, means creating “virtual machines” (aka virtual computers) by splitting the resources of a single computer into multiple sets of resources that function independently of each other. For example, we can take the resources of that single “machine” and divide them into three separate units, each functioning independently, and we have three “virtual machines”, aka “VMs”. Why would we do this instead of buying three separate computers? Primarily cost. It is much less expensive to buy a single large computer than three separate smaller ones. These savings are magnified by the fact that, for some operations, the three VMs can share resources, and the configuration can be changed as resource requirements change, meaning that the combined requirements for the three VMs are less than for three individual computers.
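To make the resource-sharing point concrete, below is a minimal Python sketch. The host size, VM names, and allocations are hypothetical and purely illustrative; the idea is simply that one appropriately sized physical host can carry several independent workloads, and that the allocations can be adjusted as requirements change.

```python
# Hypothetical example: one physical host carved into three virtual machines.
# The host size and VM allocations below are illustrative, not a recommendation.

HOST = {"cpu_cores": 32, "ram_gb": 256}

VMS = {
    "file-server": {"cpu_cores": 8,  "ram_gb": 64},
    "app-server":  {"cpu_cores": 12, "ram_gb": 96},
    "mail-server": {"cpu_cores": 8,  "ram_gb": 64},
}

allocated_cpu = sum(vm["cpu_cores"] for vm in VMS.values())
allocated_ram = sum(vm["ram_gb"] for vm in VMS.values())

print(f"CPU allocated: {allocated_cpu} of {HOST['cpu_cores']} cores")
print(f"RAM allocated: {allocated_ram} of {HOST['ram_gb']} GB")

# Because the three VMs rarely peak at the same moment, a hypervisor can share
# (and even modestly oversubscribe) these resources, which is part of the cost
# advantage over buying three separate physical servers.
if allocated_cpu > HOST["cpu_cores"] or allocated_ram > HOST["ram_gb"]:
    print("Warning: allocations exceed physical capacity (oversubscribed).")
```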
Physical Host: The computer that has been virtualized into two or more VMs is called a “physical host”.
VM Sizing: The first thing we look for relative to virtualization during the STR is the sizing of the VMs. In other words, have sufficient resources been allocated to each VM? If not, this can lead to poor performance or even server failure. We do this by observing the utilization of the resources and the performance of the applications. This is a high priority.
Warranties: Are the servers used in virtualization under current warranty? Having active warranties is essential when we need to obtain parts or speak with the manufacturer for support. If out of warranty, we determine if extended warranties are available. If not, it is often a good time to consider retiring the older machine in favor of a newer version. (If it has reached this state, it is probably pretty old.)
Hypervisor: The term “hypervisor” comes from the word “supervisor”. The hypervisor is the software that runs (or supervises) the virtual machines on the host computer. Hypervisor software also makes it possible to run multiple operating systems simultaneously. For instance, with a hypervisor, you can run Windows, Linux and MacOS on a single physical host, and each VM could use any of them.
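For readers who want to see what a hypervisor exposes, here is a minimal sketch assuming a libvirt-based hypervisor (such as KVM) and the libvirt Python bindings; environments built on Hyper-V or VMware would pull the same information from their own management tools instead.

```python
# Minimal sketch, assuming a libvirt-based hypervisor (e.g., KVM) and the
# libvirt Python bindings (pip install libvirt-python). It lists each VM the
# hypervisor is supervising, with its state, vCPU count, and memory allocation.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state, max_mem_kb, _mem_kb, vcpus, _cpu_time = dom.info()
        running = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():<20} {running:<8} vCPUs={vcpus} RAM={max_mem_kb // 1024} MB")
finally:
    conn.close()
```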
Operating System: The main program in a computer that controls the way the computer works and makes it possible for other programs to function. (Merriam Webster)
Operating System: Is the operating system being used licensed and is it still being supported by the manufacturer?
Software Licensing & Support: Again, we want to know that the licensing is up to date and support is still available.
Specifications: Have the VMs been appropriately sized for the applications that are being hosted on them? To determine this, we observe their utilization and performance. We are then able to determine what, if anything, needs to be improved. This is important because if the VMs do not have proper resources available, it can seriously affect performance.
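As one simple way to “observe utilization”, the sketch below samples CPU and memory inside a VM using the third-party psutil package; the sample count and alert thresholds are illustrative assumptions, and in practice this kind of data usually comes from a monitoring platform rather than an ad-hoc script.

```python
# Minimal sketch: sample CPU and memory utilization inside a VM to help judge
# whether it has been sized appropriately. Requires the third-party "psutil"
# package (pip install psutil). Sample counts and thresholds are illustrative.
import psutil

SAMPLES = 12          # number of samples to take
INTERVAL_SECONDS = 5  # seconds per sample
CPU_ALERT = 85.0      # percent
MEM_ALERT = 90.0      # percent

cpu_readings = [psutil.cpu_percent(interval=INTERVAL_SECONDS) for _ in range(SAMPLES)]
avg_cpu = sum(cpu_readings) / len(cpu_readings)
mem_used = psutil.virtual_memory().percent

print(f"Average CPU over {SAMPLES * INTERVAL_SECONDS} seconds: {avg_cpu:.1f}%")
print(f"Memory in use: {mem_used:.1f}%")

if avg_cpu > CPU_ALERT or mem_used > MEM_ALERT:
    print("This VM may be undersized for its workload; consider adding resources.")
```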
Monitoring & Management: We believe all servers need to be monitored and managed to ensure that patches and anti-virus software are always up-to-date. As you can imagine, in today’s cyber-crime environment, this is highly important.
Administrator Accounts: We want to see that all administrators have their own dedicated accounts and that each has a unique login. When accounts are shared, it is often impossible to determine who was responsible for errors or omissions; hence, corrective action and retraining become more difficult.
Web Security: We want to see that Internet security has been properly installed and is up-to-date. This includes items such as Internet Explorer Enhanced Security, OpenDNS, and browsing policies. Servers should never be used for accessing the Internet except for legitimate admin purposes such as downloading new software or updates.
DNS (Domain Name System): An often-used analogy to explain the Domain Name System is that it serves as the “phone book” for the Internet by translating human-friendly computer hostnames into IP addresses. For example, the domain name www.example.com translates to the IP addresses 93.184.216.119 (IPv4) and 2606:2800:220:6d:26bf:1447:1097:aa7 (IPv6). Unlike a phone book, DNS can be quickly updated, allowing a service's location on the network to change without affecting the end users, who continue to use the same host name. Users take advantage of this when they use meaningful Uniform Resource Locators (URLs) and email addresses without having to know how the computer actually locates the services. (Wikipedia) (Note: IPv4 and IPv6 are the protocols under which IP addresses are created.) Think of it this way: would you rather email your friend Joe by sending the message to his email address, Joe@hiscompany.com, or to an address like 2606:2800:220:6d:26bf:1447:1097:aa7? Try remembering a few of those!
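The “phone book” lookup is something any computer can demonstrate. The short Python sketch below uses only the standard library to translate a hostname into its IP addresses (the hostname is the same example used above; the addresses returned will be whatever DNS currently publishes for it).

```python
# Minimal sketch: ask DNS (via the operating system's resolver) for the IP
# addresses behind a human-friendly hostname, using only the standard library.
import socket

hostname = "www.example.com"

addresses = set()
for family, _type, _proto, _canonname, sockaddr in socket.getaddrinfo(hostname, None):
    label = "IPv4" if family == socket.AF_INET else "IPv6"
    addresses.add((label, sockaddr[0]))

for label, ip in sorted(addresses):
    print(f"{hostname} -> {ip} ({label})")
```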
OpenDNS: OpenDNS is a company and service that extends the Domain Name System (DNS) by adding features such as phishing protection and optional content filtering in addition to DNS lookup, if its DNS servers are used.
The company hosts a cloud computing security product suite, Umbrella, designed to protect enterprise customers from malware, botnets, phishing, and targeted online attacks. The OpenDNS Global Network processes an estimated 100 billion DNS queries daily from 85 million users through 25 data centers worldwide. The company was acquired by Cisco in 2015 for $635 million in cash. (Wikipedia)
There are a number of other items we check that are the same as, or similar to, those covered in Part 1 of this series: Redundant Power Supply, UPS, and Hardware Naming & Labeling. See Part 1 for this information.
Structure – Location: We check to make sure all data storage is properly configured, meaning it is centrally located and each location has been properly named. This is important for proper backup and becomes critical in the event of recovery. Instead of digging around and trying to figure out where sets of data are, or where they belong, everything is in its proper place and easy to locate and/or restore.
Structure – Shares: We are always concerned with permission management. In other words, we want to make sure that the right people have access to the data they need…but only the data they need. We check to see that data is segregated into company-wide, department-wide, and individual-only compartments. This way, everyone in the company, from the CEO to the newest entry-level employee, has access to all the information they need to do their jobs, but only that data.
Structure – Permissions: Once we have determined that data is properly segregated, we check to make sure that permissions are properly structured and disseminated so that effective data protection and appropriate sharing are achieved. Every company must decide who has access to what data. This varies greatly in importance depending on the industry. For example, organizations that work with personal financial information or health records are under strong legal obligations. While the legal obligations may not be there for other companies, they may still have sensitive customer information, internal trade secrets, or their own financial data that should not be available to everyone who works there.
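As a simple illustration of a permissions check, the sketch below assumes a Unix-style file server with hypothetical share paths; it reports the owner, group, and permission bits on each top-level share so overly broad access stands out. Windows/NTFS shares would be reviewed with the platform's own ACL tools instead.

```python
# Minimal sketch, assuming a Unix-style file server. The share paths are
# hypothetical; the goal is to surface the owner, group, and permission bits
# of each top-level share, flagging anything world-writable.
import grp
import os
import pwd
import stat

SHARES = ["/srv/company", "/srv/finance", "/srv/hr"]  # hypothetical share paths

for path in SHARES:
    try:
        st = os.stat(path)
    except FileNotFoundError:
        print(f"{path}: not found")
        continue
    owner = pwd.getpwuid(st.st_uid).pw_name
    group = grp.getgrgid(st.st_gid).gr_name
    mode = stat.filemode(st.st_mode)  # e.g., "drwxrwx---"
    flag = "  <-- world-writable" if st.st_mode & stat.S_IWOTH else ""
    print(f"{mode} {owner}:{group} {path}{flag}")
```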
Storage – Data Location: All too many companies, often unknowingly, have end-users (employees) whose data is stored locally rather than on a server. This means that the data is stored on the desktop and, thereby, subject to loss in the event of a hard-drive crash or virus. If the desktop computer goes bad, files, often representing many years of work, can be lost forever, since they have not been backed up on a server. We have seen situations where people assumed their data was being backed up, only to discover too late that it was not. As an alternative, individual workstations can be backed up, but this is a “second best” solution.
Storage – Configuration: We check to see if the RAID configuration is appropriate. RAID is in place for redundancy of file storage and different situations require different redundancy levels. We want to make sure that the level we find works for that business and, if not, make adjustments.
RAID (redundant array of independent disks; originally called redundant array of inexpensive disks) is a way of storing the same data in different places on multiple hard disks to protect that data in the case of a drive failure.
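The trade-off between usable capacity and redundancy is easy to see with a little arithmetic. The sketch below uses a hypothetical array of four 4 TB drives; the right level for a given business also depends on performance and rebuild-time considerations.

```python
# Minimal sketch: how common RAID levels trade raw capacity for redundancy,
# using a hypothetical array of four 4 TB drives.
disks = 4     # drives in the array
disk_tb = 4   # capacity of each drive, in TB
raw_tb = disks * disk_tb

layouts = {
    "RAID 0 (striping, no redundancy)": (raw_tb,               "no drive failures"),
    "RAID 5 (single parity)":           (raw_tb - disk_tb,     "any 1 drive failure"),
    "RAID 6 (double parity)":           (raw_tb - 2 * disk_tb, "any 2 drive failures"),
    "RAID 10 (striped mirrors)":        (raw_tb // 2,          "1 drive failure per mirror pair"),
}

print(f"Raw capacity: {raw_tb} TB across {disks} drives")
for name, (usable_tb, survives) in layouts.items():
    print(f"{name}: {usable_tb} TB usable, survives {survives}")
```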
Part 3 of this series will cover hardware such as desktops and laptops, “thin clients”, and mobile devices. In addition to this series of articles, you can find a wealth of I.T. knowledge and information at www.DynaSis.com/the-latest, where we have posted White Papers, Articles, Case Studies and Blog Posts on a wide variety of technology-related subjects designed for the C-Level executives of small to mid-sized companies. For more technology information that we find and share, follow us on Twitter (@DynaSisIT) and LinkedIn (DynaSis Technologies).