
With the “Great Recession” more than six years old and corporate hiring still not picking up substantially, many experts are suggesting that the current unemployment situation has become the new normal. When the economic crisis hit, companies and their employees had to learn to do more with less, and in most cases this meant working more efficiently. Leading the way was information technology (IT), which has been a key driver of productivity increases since the 1990s.

To reduce expenses, Corporate America not only expected existing employees to make better use of the technology that was available to them, but it also leveraged IT to perform certain tasks that previously had been handled by humans. Effectively, technology enabled companies to become leaner and more competitive, and it’s now evident that they intend to stay that way. As a result, most of these firms simply do not need the same number of employees that they had before the recession began.

This shift had a positive effect on the bottom lines of these tech-savvy firms, but not all businesses reaped such gains. In some cases, companies that were not attuned to technology before the Great Recession tried to cut expenses by trimming technology spending, and in doing so they missed out on one of the greatest periods of productivity growth in the world’s history.

The good news is that it’s not too late to put technology to work as a productivity driver. Such advances as remote IT management and software as a service (where companies lease the right to use software that is hosted in the cloud) make it very affordable for even the least technically oriented firm to catch up with the IT leaders.

If your company isn’t leveraging the full potential of technology, I urge you to explore some of these solutions, soon. Proactive adoption of technology is the new paradigm, and your competitors may already be a step ahead of you.

Many companies, especially those with dispersed workforces or client bases, have discovered the economic benefits of video conferencing, also called telepresence. Those that haven’t might want to consider the environmental benefit of this valuable business tool, as well. Although each company’s reduction in carbon footprint from video conferencing might not be much (especially if it is a small or medium-sized business), cumulatively, the effect is profound.

A report commissioned by the Carbon Disclosure Project (CDP) and sponsored by AT&T provides solid support for the benefits of this technology. The study, which included an in-depth analysis of 15 corporations, each with four telepresence-enabled conference rooms, developed benefit scenarios for the business community as a whole that are pretty impressive. Specifically:

Add to these benefits the proven economic value of video conferencing, and the argument in favor of this tool becomes even more compelling:

Furthermore, not requiring employees to travel reduces stress, increases productivity and improves work-life balance.

Here’s the best news: telepresence systems have become affordable to implement. Companies that deploy them save money, have happier employees and have a great environmental story to share with their clients. For a demonstration of how telepresence can work for your firm, give me a call at 678.218.1769.

Many service providers talk about “best practices” and “excellence” in IT Service Management. However, few introduce to their customers (or adhere to themselves) a documented framework for best-practices planning, delivery and support of IT services. Officially called the IT Infrastructure Library (ITIL®), this widely accepted framework is a cohesive yet constantly evolving set of best practices for Service Management. To ensure it is both complete and current, ITIL draws on the experience and expertise of private and public sector companies that have successfully achieved the goals for excellence outlined in the framework.

How It Works

Rooted in the move from mainframe computers to distributed computing and dispersed resources, ITIL helps companies break down the IT silos that tend to arise in organizations and ensure consistent application of processes for technology delivery and support. For every new and existing business service, ITIL promotes the importance of a Service Lifecycle to drive organizational effectiveness and efficiency through predictable service levels. (For the purposes of this model, a business service is defined as any operation that provides value to users and is supported by technology services and infrastructure.) The lifecycle comprises five phases:

  1. Service Strategy—Understanding who the IT customers are, the offerings needed to meet their needs, and the capabilities and resources required to develop and execute these offerings successfully.
  2. Service Design—Ensuring that the design of new and modified services and their related processes cost effectively meet customer expectations.
  3. Service Transition—Successfully building, testing and implementing the processes necessary to ensure the customer enjoys the desired value from the service. This phase of the lifecycle also addresses change management, validation and testing, and other important variables.
  4. Service Operation—Delivering the service and ensuring its health on an ongoing basis, including minimizing disruptions, detecting problematic trends, and otherwise managing service access.
  5. Continual Service Improvement (CSI)—An overarching concept rather than a separate step, CSI provides the mechanism for an IT team to measure and improve upon the efficiency and effectiveness of service delivery and process execution.

ITIL offers enormous benefit to companies that adopt it, because it fosters consistent, repeatable processes that can be measured and continually fine-tuned, not only to maintain excellence, but also to adapt to changing customer needs, emerging business opportunities and challenges and other important factors.

Mega-corporations, manufacturers and technology giants such as IBM, HP, Wal-Mart, Sony, Pfizer, Boeing and Citi have embraced ITIL and wouldn’t operate for a single day without it. More importantly for small and medium-sized businesses (SMBs), any business working with a qualified IT partner can implement the ITIL model without excessive expense or difficulty.

DynaSis fully supports and adheres to ITIL, not only as a tool for fostering excellence in other SMBs, but also in the daily practice of our own operations. We have helped many of our clients develop and execute best-practices IT solutions using these guidelines.

Ready to get started? I’d love to tell you more about what ITIL can do for you, or schedule your company for a DynaSis Technology Assessment, which is the first step toward achieving the business service excellence of your dreams.

For many small to medium-sized business (SMB) owners, disaster recovery and business continuity (DR/BC) are nebulous concepts to be dealt with "when there is time." The problem is that for many (more than 50%), the "right" time never comes, leaving them unprepared when disaster strikes. Yet, in the past year, many SMBs have come to realize that disasters can hit anywhere and that they cannot put off planning forever.

Although preparing a DR/BC plan is admittedly not a "no-brainer" process, it doesn't have to take hundreds of hours to complete. Perhaps the most important part of this effort—and something you can do without developing bulky manuals and detailed schematics—is determining your "magic numbers" and then taking action to ensure you can meet them.

Three numbers—Recovery Time Objective (RTO), Recovery Point Objective (RPO) and Maximum Tolerable Outage (MTO)—will give you a good idea of how quickly you need to recover your business—from critical client and decision-making data to core business processes—to ensure your firm doesn't collapse after the dust of a disaster settles. Once you know this information, you'll be in a better position to plan an effective recovery.

Recovery Time Objective (RTO): The maximum time within which you need to restore your data, applications and critical IT-related processes after an outage.

Recovery Point Objective (RPO): The amount of recent data you could tolerate losing in the event of an outage—which equates to the frequency of your backup snapshots.

Maximum Tolerable Outage (MTO): The longest amount of time your business and its employees could function without access to data, email and applications before the outage puts your business and/or client relationships at risk.
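As a rough illustration, the three numbers have to fit together: your recovery target must fall inside your maximum tolerable outage, and your backup schedule must keep pace with your recovery point objective. Here is a minimal Python sketch using hypothetical hour values, not figures from any real business:

```python
# Hypothetical "magic numbers" for a small firm; illustrative values only.
RTO_HOURS = 8     # target: restore data and applications within 8 hours
RPO_HOURS = 4     # acceptable loss: no more than 4 hours of recent work
MTO_HOURS = 24    # beyond a full day offline, client relationships suffer

BACKUP_INTERVAL_HOURS = 12   # how often backup snapshots actually run today

def plan_is_consistent(rto, rpo, mto, backup_interval):
    """A recovery plan holds together only if the recovery target fits
    inside the maximum tolerable outage, and backups run at least as
    often as the recovery point objective demands."""
    return rto <= mto and backup_interval <= rpo

# A 12-hour backup cycle cannot satisfy a 4-hour RPO.
print(plan_is_consistent(RTO_HOURS, RPO_HOURS, MTO_HOURS, BACKUP_INTERVAL_HOURS))  # False
```

Here the 12-hour backup cycle fails the 4-hour RPO, which is exactly the kind of mismatch this planning exercise is meant to expose before a disaster does.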

Calculating your RTO, RPO and MTO requires you to run a business impact analysis, identify processes that must be operable for you to function, and evaluate the strength of your client relationships (and their tolerance for outages). You'll also need to investigate your vendors and supply chains to see what their disaster plans are and whether you have alternate choices. DynaSis recently published a practical guide to help you in this quest; click here to view our white paper on the topic.

However, you can calculate a rough approximation of your RTO, RPO and MTO through simple visioning exercises. Make a list of clients you could not afford to lose, then estimate their tolerance for a service outage (you know how patient your big clients are). Consider the type of work you do, and decide whether or not it requires instant access to recent data and if employees could perform that work remotely.

Then, evaluate whether your crucial data—like email and client information (contacts/contracts/RFPs, etc.)—is stored in the cloud or only at your location. Finally, consider whether your business processes can be completed remotely, and whether employees are cross-trained sufficiently that some could step in if others were tied up with disaster crises.

Many companies that make these rough projections are surprised to discover that their tolerance for outages and disruption is very low. They also realize that even if their physical location is not functional, they could maintain client relationships and operate at a minimal level—provided they had access to their information.

If this is the case with your calculation, I invite you to call me. We can perform an in-depth risk analysis that pinpoints your vulnerabilities and makes recommendations for improvement. Given that more than two-thirds of SMBs are located in areas prone to disasters, and the frequency and impact of disasters is increasing (per the National Oceanic and Atmospheric Administration), disaster recovery for most companies is no longer a matter of "if." It's a matter of "when."

Your network runs pretty well most of the time. Sometimes, it seems a little slow, but usually, it works well enough to keep business humming along reasonably smoothly. Your network is in good shape, right?

If this sounds like your business, think again. Nine out of 10 corporate networks have some type of problem, from security flaws to improperly configured devices that slow down network speeds. Network hardware has become so adept at resolving or bypassing conflicts and other glitches that one or two problems alone might not cause an outage.

But when the right combination of issues occurs, BAM! Down goes your network. If you have myriad problems, untangling them all and finding the source of the outage can take hours—or days.

Fortunately, there is a technique that can identify problems with your network, hopefully before they take your network down or compromise its security and that of your systems and business assets. It’s called a network assessment, and it’s a service that can be run as an automated process, in the background, with little to no impact on your users.

So, what does the network assessment tool do? It tunnels through your network, identifies and creates an inventory of all its connected devices, and scans for anomalies. Problems it will identify include:

Network assessments gauge the security and composition of your network, enabling a comparison against your current expectations and objectives for the future. Additionally, this information is valuable for more than simply exploring your network and its ecosystem. It also helps you determine if users have installed software outside of company licensing agreements, pinpoint devices where users are not following password reset policies and more.
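To make those last two checks concrete, here is a minimal Python sketch of how an assessment report might flag rogue devices and stale passwords. The host names, record fields and 90-day policy are illustrative assumptions, not the actual output format of any assessment tool:

```python
# Illustrative records a network-assessment scan might produce. Host names,
# fields and the 90-day password policy are assumptions for this sketch.
scan_results = [
    {"host": "FS01",       "authorized": True,  "password_age_days": 30},
    {"host": "LAPTOP-7",   "authorized": True,  "password_age_days": 210},
    {"host": "UNKNOWN-AP", "authorized": False, "password_age_days": 0},
]

def flag_anomalies(devices, max_password_age=90):
    """Flag two problems an assessment report would surface: unauthorized
    hardware on the network, and users ignoring password-reset policy."""
    rogue = [d["host"] for d in devices if not d["authorized"]]
    stale = [d["host"] for d in devices
             if d["authorized"] and d["password_age_days"] > max_password_age]
    return rogue, stale

rogue, stale = flag_anomalies(scan_results)
print(rogue)  # ['UNKNOWN-AP']
print(stale)  # ['LAPTOP-7']
```

A real assessment gathers this inventory automatically by scanning the network; the point of the sketch is simply that once the inventory exists, policy violations fall out of straightforward comparisons.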

Network assessments usually include a collection of reports (ours do) that explain what the evaluation uncovered and make recommendations for improvements. With those in hand, you can decide to take action internally or hire outside specialists to alleviate pressure points and reconfigure questionable or misconfigured devices.

Network assessments aren’t designed to evaluate network health (e.g. service availability; application performance). That’s a separate operation that works in tandem with the network assessment to ensure your network is 100% optimized to power your business. When you’re ready to learn more, we’d love to hear from you.

Technology is wonderful and infuriating at the same time. The more dependent we become on it, the more we are negatively affected when it fails us. Unfortunately, that problem isn’t going away any time soon. The only way to resolve it is to minimize the potential for system disruption.

So, how can you best go about that? According to a 2009 study performed by research firm IDC, “Well-targeted upgrades, coupled with a rigorous program to standardize and improve IT practices, can deliver substantial risk reduction and could reduce total annual outage risk by as much as 85%, in some cases, with downtime reduced from an average of over 2 hours per month to less than 45 minutes.”

If you’re an IT executive or IT-engaged business owner, you likely already know this. You also know how hard it is to manage your system upgrades and patches while trying to implement sound, standardized IT practices among company personnel.

In fact, if you’re an IT professional working for an SMB, you’re probably already spending more than 50% of your time on tedious, painstaking maintenance tasks, and still not getting to them all. (A 2010 study, “The State of IT Systems Management,” indicates that 67% of IT pros spend 50-74% of their time on maintenance.) You probably don’t have time to teach anyone anything.

You may also be aware that there are great managed service providers that can give you end-to-end system management 24/7/365. These providers use automation tools to deeply scan your systems continuously, providing a level of detailed inspection that would cost a fortune if a person tried to do it (assuming it was even possible).

The best of these providers have a “command center” that lets them act on any signs of trouble immediately, no matter when it occurs. They also handle all your patches and updates, and in general keep everything running smoothly. (Hint: we’re one of them.) If you’ve never heard of these types of solutions, I’d be happy to tell you more—just give me a call.

If you’re aware of them, but budgetary restrictions or management resistance prevent you from exploring them, here’s a bit of ammunition: The IDC study mentioned earlier found that SMBs lose as much as $70,000 in business value (sales, productivity, customer service and other tangibles), per hour of downtime. In nearly every scenario, turning system monitoring and maintenance over to a third-party vendor will save you more than it costs.
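The IDC figures above make the break-even arithmetic easy to sketch. Using the cited $70,000-per-hour upper bound and the reduction from over 2 hours of downtime per month to under 45 minutes:

```python
# Back-of-the-envelope downtime cost, using the IDC figures cited above.
COST_PER_HOUR = 70_000            # upper-bound business value lost per hour
BASELINE_HOURS_PER_MONTH = 2.0    # average monthly downtime before improvements
IMPROVED_HOURS_PER_MONTH = 0.75   # IDC's "less than 45 minutes" scenario

monthly_saving = (BASELINE_HOURS_PER_MONTH - IMPROVED_HOURS_PER_MONTH) * COST_PER_HOUR
annual_saving = monthly_saving * 12

print(f"${monthly_saving:,.0f} per month")   # $87,500 per month
print(f"${annual_saving:,.0f} per year")     # $1,050,000 per year
```

Even if your actual hourly cost is a fraction of that upper bound, the recovered value will typically dwarf the fees of a managed service provider.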

With 2012 an increasingly distant memory, have you considered how your IT systems fared last year? Are you implementing any big changes in 2013? If you are an upper-level IT executive, statistics show that this year, you are probably redirecting responsibility from internal staffers to third-party resources. Whether you are an IT exec or a business owner, if you don’t have such a plan, let me provide some insight into this trend and explain why this strategy makes sense.

According to a survey of CIOs (chief information officers) and other top IT execs by the Society for Information Management (SIM), internal staff consumed 33% of firms’ IT budgets in 2012, down from 38% in 2011 and 43% in 2010. In 2012, outsourcing accounted for 17% of those budgets. In 2013, survey participants expect outsourcing to jump to 23% of the IT budget.

This outlook reflects a reality that’s increasingly becoming an imperative for many companies. The cost of adding IT professionals to the payroll outweighs the expense of outsourcing IT responsibilities to third-party managed service providers. This is true, not only for management tasks such as IT systems maintenance and monitoring (more about that in a later article), but for project-based staffing, as well.

For small and medium-sized businesses (SMBs), it’s too expensive to have a full complement of tech staffers that can effectively cover every aspect of a firm’s IT systems and infrastructure. And, with IT evolving at lightning speed, it’s impossible for even the most talented IT pros to stay abreast of every breaking development. As a result, when SMBs embark on highly specialized projects, either the minimal in-house staff must ramp up quickly (with unpredictable results), or outside help must be brought in.

With on-demand IT staffing, firms can procure highly targeted expertise in small increments, ensuring all their projects are executed by someone who is competent in that particular IT discipline. These specialists provide the technical foundation to ensure project success and prevent budgets from scaling out of control. As a bonus, third-party, on-demand IT assistance is a ready resource that doesn’t ask for insurance and never calls in sick during crunch times.

Well-rounded IT service firms such as DynaSis offer this type of on-demand staffing, whether a company needs network engineers to design an expansion or a high-level planning expert to assist with a strategic infrastructure roadmap. Top IT execs are increasingly awarding a large percentage of their projects to outsourced experts. To learn how you can join them, give me a call.

by Dave Moorman

In January 2013, Internet security firm Kaspersky proclaimed that in 2012, spam hit a five-year low. Specifically, the report stated, “This continual and considerable decrease in spam volumes is unprecedented.”

However, before you and your employees dance in the streets at the thought of less spam, consider this: Kaspersky attributed the reduction not to a lessening of spam messages, but rather to the success of spam-fighting technologies. In other words, spammers are still plying their nefarious trade, but they are less successful at getting through.

Of even greater concern, cybercriminals (for whom spam is now a favorite weapon) are becoming increasingly malicious and inventive. Kaspersky described the range of subjects used in malicious emails as “impressive.” At DynaSis, we think “alarming” is a better description. For many years, malicious attackers have used tricks such as faked notifications and messages from a variety of legitimate (and fictional) sources such as credit card companies, financial and government organizations, and other trusted entities.

In 2012, criminals expanded their repertoire to include fake messages from airlines, coupon services, travel reservation firms and other leisure-industry firms. Some of these messages look like innocuous reservation confirmations and other routine communications. Others offer too-good-to-be-true “deals.” (Yes, the lure of saving big money continues to take down a lot of folks.)

Like other dangerous spam, these fake emails usually contain malicious attachments or links to malicious sites. Clicking them can do anything from installing a zombie bot that takes over your network to launching a worm that eats your data. And, because it happens inside your defense shields, it may go undetected.

For this reason, it continues to be utterly vital for SMBs to incorporate best-practices IT security management, including strong spam protection, into their overall IT strategy. If you are not absolutely certain your IT infrastructure is a veritable fortress and your email is effectively protected from spam, contact DynaSis for a no-strings-attached consultation.

Don’t count on your employees being savvy enough to outsmart malicious spammers. If they can trick the top management of Fortune 100 firms and global governments, they can dupe anyone.

by Dave Moorman

In the 1990s, terms such as telecommuting and teleworking became popular descriptions for the pursuit of work outside the office—with the help of technology. (Telecommuting usually referred to work at home; telework sometimes referred to work at a satellite office.)  Today, those terms are being replaced by newer catchphrases like workshifting and the remote [or mobile] workplace.

In reality, all these terms describe essentially the same activity—getting work accomplished anywhere that is not your main place of business. However, the way employees view these activities—and their popularity—varies widely. In a future post, I’ll talk more about the benefits of other remote workplace solutions. Today, I’d like to share the benefits to SMBs in supporting telecommuting (using the definition of staffers working from home).

According to a study of 67,000 workers, published in June 2012 in the Department of Labor’s Monthly Labor Review, approximately 30% of surveyed workers perform some form of telecommuting (full or part time). That’s approximately the same as in the 1980s.

What’s changed, to the benefit of their employers, is the amount of extra time telecommuters put in at home. In complete opposition to the notion that telecommuters goof off more, the study indicated that those who perform any work at home tend to work five to seven hours more per week than if they weren’t telecommuters. Furthermore, 71% of the telecommuters were managerial or professional employees, who generally aren’t paid overtime.

Here’s some more good news. Despite the extra hours they have to put in, employees want to be able to work from home. In WIRED magazine’s recent reader survey, 62% said the ability to work remotely was important—and their favorite environment, by far (84%) was home. The survey also found that nearly 50% of workers felt more productive and less stressed working remotely.

So, not only will your employees work more hours when you let them telecommute at least some of the time, but nearly half of them will get more work done than at the office. That’s a powerful incentive to expand your remote working program.

Here’s the kicker: Even if you allow telecommuting, you have to provide the right environment for all these great numbers to fall into place. In the WIRED survey, 45% of respondents said they were encumbered by unmet needs outside the office. The number one complaint (82%) was lack of a high-speed Internet connection to access corporate resources.

DynaSis can’t help you persuade the territorial manager who is hesitant to give up direct control of his staffers (a leading impediment to the practice, per sources cited in the DoL report). We can, however, help you transition to a cloud-based mobile productivity solution that gives your workers high-speed, remote access to corporate resources, wherever they are. Give me a call soon and let’s discuss how you can extract the latent productivity in telecommuting.

by Dave Moorman

In the IT security world, service firms toss around terms such as Vulnerability Assessment and Penetration Testing as if everyone knows what they mean. This may leave you wondering, “What do these two processes do, and are they both important or do they cover the same ground, twice?”

Vulnerability Assessment (also called Vulnerability Analysis) is the process of identifying weak points on a network where a cyber-attacker could potentially gain access or otherwise do harm. Vulnerabilities can be anything from open ports (the “doors” that let data flow between devices on a network and the Internet) to open, rogue access points (unsecured, unauthorized Internet connection points). During a Vulnerability Assessment, specialized software scans and analyzes network traffic, connected devices and other elements of the network to identify flaws that increase vulnerability to attacks.
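As a simplified illustration of one such check, the sketch below compares the ports observed open on a host against a policy allowlist. The port numbers and the allowlist are hypothetical assumptions; a real assessment scans live traffic and devices rather than a hand-written list:

```python
# Policy allowlist: ports this host is supposed to expose (assumed values).
ALLOWED_PORTS = {80, 443, 993}   # web traffic and secure IMAP only

def open_port_findings(observed_open_ports, allowed=ALLOWED_PORTS):
    """Any open port outside policy is a potential 'door' an attacker
    could probe, so it gets flagged for review."""
    return sorted(p for p in observed_open_ports if p not in allowed)

# 23 (telnet) and 3389 (RDP) fall outside policy and would be flagged.
print(open_port_findings({80, 443, 23, 3389}))  # [23, 3389]
```

The assessment stops at flagging the finding; whether that exposed port can actually be abused is the question Penetration Testing answers.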

Penetration Testing, on the other hand, focuses on gaining unauthorized access to the system and its resources by simulating an actual attack on the network and/or its devices. Although Penetration Testing can reveal vulnerabilities, its goal is to determine what an attacker could do once he found the system’s flaws. Furthermore, Penetration Testing is often used to validate whether implemented security improvements are working or whether holes can still be exploited.

The two processes work together in much the same way that a home security expert might examine your house for windows that are easy to open (Vulnerability Assessment) and then determine how difficult it would be to bypass your alarm system, open the windows and get inside to steal your jewelry (Penetration Testing).

In other words, Vulnerability Assessments tell you what within the network needs securing; Penetration Tests confirm whether or not the network is actually secure. Both processes can play a role repeatedly throughout the lifecycle of an IT framework as new devices are added, network configurations change and other adjustments are made.

Most importantly, these two processes are part of an enduring IT security management effort designed to secure your system, its resources and its assets against intrusion, theft and exploitation. With companies from global conglomerate Sony to the smallest Mom and Pop shops falling victim to cyber-attacks, IT security is something no business owner should overlook. To learn more about security management and the role these two processes play, give us a call.
