
Does My MSP Need a Network Infrastructure Tool?


As a new or growing MSP, discovering tools and processes that will let you do more with less is a prime focus. You’re always looking for ways to boost that bottom line by cutting costs, driving more revenue, and improving overall efficiency.

If you have more than a handful of employees, you’ve likely already implemented good RMM (remote monitoring and management) and PSA (professional services automation) tools. After all, studies show that service providers running business systems such as PSA and RMM tools command higher rates, renew more clients, spend less time on data management, and are overall more profitable than companies without automation support.

Now you’re wondering about the next level. Maybe you’ve been worried about your lack of infrastructure visibility. Or maybe a network infrastructure failure has already bitten you.

Quite possibly you’ve been hearing rumblings in the industry about how infrastructure RMM can help you deliver complete network management and now you’re curious — What is it? And do you really need it?

What is network infrastructure RMM?

Network infrastructure RMM is like the RMM you know and love but specifically for infrastructure devices. Where something like LabTech or Continuum helps you manage servers and endpoints, a network infrastructure tool gives you visibility and control over devices like routers, switches, firewalls, and wireless controllers.

Network infrastructure RMM does things like:

  • Automatically map and take inventory of the network
  • Monitor and alert on network elements
  • Back up network configurations, and give you the ability to quickly restore a previous config
  • Offer secure remote access to infrastructure devices
An Auvik network map / Photo: Auvik Networks

… But we already have an RMM tool

That’s great! But as I’ve pointed out, traditional RMM tools focus on endpoints and servers. They just don’t give you the management capability you need for infrastructure. A network infrastructure tool works as a complement to traditional RMM by extending its capabilities across the whole network.

… But we already monitor the network

A monitoring tool like PRTG or LogicMonitor provides great alerting. But how do you troubleshoot with the data you get? Only network infrastructure RMM combines alerting with documentation, automated mapping, configuration management, remote access, and more.

OK, but is a network infrastructure tool really necessary?

Let me answer that by turning this around and asking you a few questions.

  • How do you react when there’s a failure that takes a client down?
    That’s a trick question — because if you’re reacting, you’re already behind the eight ball. Downtime is bad for your clients and puts your relationship with them at risk. The more it happens, and the longer it takes to fix when it does happen, the worse the erosion of client trust.
  • How often do you send a network engineer on-site?
    Every truck roll is an expense that eats away at your profit margin. Once your tech arrives, the clock starts ticking. Now — do they have up-to-date documentation? Do they know the problem they’re looking for? Or are they on a blind hunt to figure out what’s causing the issue?
  • How do you handle complaints about the Internet being slow?
    Maybe the culprit is the ISP, maybe it’s the guy in the corner streaming Netflix, or maybe it’s a pegged firewall. It’s hard to know where to even start looking. And if you don’t know where to look, you’re wasting time.
    Frustrated and wasting time? / Photo: jseliger2 on Flickr

  • How do you document your client networks?
    In a troubleshooting situation, it’s critical to know what you’re managing and how it’s connected. Otherwise, you’re wasting time. Again.
  • How do you pitch new business?
    It’s important to get an accurate picture of the environment you’ll be dealing with before you sign that contract. Once you build your quote and the client signs off, it’s nearly impossible to dig yourself out of a mess.
  • How do you onboard new clients?
    Does it take days, weeks, or even months?
  • How do you perform network configuration backups?
    Do your techs scramble to recreate the configs from scratch while your client fumes over the outage?

If any of these questions got you thinking about how much time your techs spend managing a client’s network and whether there’s a better (faster) way, then a network infrastructure tool could be a profitable addition to your business.

Dig deeper

The post Does My MSP Need a Network Infrastructure Tool? appeared first on Auvik Networks.


Hackerpocalypse: The MSP’s Role in Saving the World


Once upon a time, hacker mischief was limited to viruses like Mars Land, which took control of DOS computers to display scrolling red terrain, or Ithaqua, which displayed snow falling on the screen every April 29.

Fast forward to the present and the cute malware of the DOS age is long gone. Today, hackers pose a much more serious threat.

For many businesses, a managed service provider (MSP) makes the difference between a secure, well-managed network and one that places them at risk of crippling attacks. That’s one good reason why MSPs are indispensable today. (For another big reason, see As Network Value Skyrockets, MSPs Become Increasingly Essential.)

That was then

The reason hackers today pose a greater threat than ever is simple: The network is the lifeline that connects everything in our modern world. Anything that disrupts the network or steals information from it will have costly consequences.

That makes networks today different from previous decades. When viruses were being written for DOS PCs, most computers weren’t connected to a network at all. Back then, if hackers compromised one computer, the damage they could cause was usually limited to that host.

Even 10 years ago, when most computers were connected to the Internet, the potential danger posed by hackers was still much smaller than it is today. Networks at the time were not as expansive, meaning an attack against one computer didn’t put so many other devices on the same network at risk. Data was also still stored locally in most cases, rather than exchanged over the network, reducing the importance of the network as a potential attack vector.

Photo: Wonderlane on Flickr

The hacker threat today

Today, however, all that has changed. When you add together all of the PCs, servers, mobile devices, and Internet of Things (IoT) endpoints, the network at a small or mid-sized business can easily consist of hundreds or thousands of hosts, while enterprise networks are larger still. If hackers are able to compromise just one device on the network, they have a stepping stone for attacking all the others as well.

The cloud computing revolution has also changed the seriousness of network security. At many organizations today, the most sensitive data lives on cloud servers rather than local devices. That means it’s transmitted over the network to users, increasing the risk that hackers could intercept the data while it’s in transit.

The diversity of devices on modern networks, and bring-your-own-device policies that allow employees to add their personal computers or phones to the network, complicate matters even more. Traditional anti-virus and network intrusion systems don’t work as well when the types of devices and software environments they need to inspect vary much more widely than they did when most networks consisted mostly of servers and workstations.

Financial damages are just the beginning

Just how much havoc can hackers wreak on modern networks? To understand the issue in monetary terms, consider these staggering statistics from a report by Cybersecurity Ventures:

  • Cybercrime costs from fraud, lost productivity, theft and other damages in 2015 totaled $3 trillion—more than the gross domestic product of France.
  • Cybercrime costs are projected to climb to $6 trillion annually by 2021.

Other statistics worth noting include:

  • The average company in the U.S. loses $15 million per year to cybercrime.
  • Hackers offer their services for hire for as little as $100. That makes it trivially easy for a malicious party to target an organization’s network with an attack that could cost much, much more than the price of hiring a hacker.
Photo: J. Carr on Flickr

This is war

The danger posed by hackers isn’t limited to financial damages. In an age when the network controls everything, hackers who attack the network can cause crippling harm to public health, government services, and more.

Just last week, the US Computer Emergency Readiness Team issued an alert about the increasing threat to network infrastructure devices. “There has never been a greater need to improve network infrastructure security,” cautions the report.

“Cybercriminals are launching missives against a global attack surface comprised of the world’s people, households, companies, governments, police, hospitals, schools, banks, power grids, utilities, data centers, servers, networks, PCs, laptops, tablets, and smartphones,” writes Cybersecurity Ventures.

The threat will only grow greater, the group adds, as the expansion of the IoT increases the number of devices that hackers might attack to take physical control of homes, offices, and public services.

This danger is why MSPAlliance, at the start of 2016, identified critical infrastructure, such as water treatment facilities and power plants, as one of the new areas that MSPs can assume a key role in helping to keep secure.

The MSPAlliance also predicted MSPs will be called on to help fight the global war on terror, as governments pass laws requiring MSPs to assist in efforts to combat terrorist activities related to the network.

The MSP’s role in stopping hackers

When networks run the world, and you run the networks, you shoulder a huge responsibility. As an MSP, you are a main line of defense against hacker threats. To fulfill your mission, ensure the networks you oversee are locked down and hardened against attack as much as possible.

Explaining how to secure a network would require a much longer article than this. But briefly, the following principles help keep networks secure:

  • Maintain visibility. Being able to map a network and monitor device status constantly is crucial for finding and identifying threats.
  • Control access. Granular access control configurations mitigate the risk of users gaining access to devices they shouldn’t control.
  • Leverage the cloud. The cloud provides a central management hub for monitoring the network. It also separates the network from the management infrastructure, so that if the network itself is attacked, the network management platform is still secure.
  • Be agile. You can no longer count on controlling which devices join the network or on establishing firm network perimeters. As a result, you need to deploy network management solutions that can scale and adapt easily to meet the ever-changing needs of a client’s network.
  • Partner with the right people. If you lack the in-house security expertise to thwart hackers, take advantage of channel partnerships, like the one Joe Panettieri describes here, to boost your ability to protect client networks.

By delivering a secure network, you’re not only providing the best service to your clients, you’re also helping to stop the hackerpocalypse and save the world.

The post Hackerpocalypse: The MSP’s Role in Saving the World appeared first on Auvik Networks.

6 Common Spanning Tree Mistakes and How to Avoid Them


Let me start by saying that spanning tree is a Good Thing. It saves you from loops, which will completely shut down a network. But it has to be configured properly to work properly. I can’t count the number of times I’ve had a client call me, desperate with a terribly broken network, and I’ve responded, “Sounds like a spanning tree problem.”

There are many ways things can go wrong with spanning tree. In this article I’ve collected a few of the recurring themes.

  1. Not configuring spanning tree at all

    As I said, spanning tree is a good thing. But for some reason, a lot of switch vendors disable it by default. So out of the box, you might have to enable the protocol.

    Sometimes people deliberately disable spanning tree. The most common reason is that the original 802.1D Spanning Tree Protocol (STP) goes through a fairly lengthy wait period from the time a port becomes electrically active to when it starts to pass traffic. This wait period, typically 30 seconds (two 15-second forward-delay stages), is long enough that DHCP can give up trying to get an IP address for the new device.

    One solution to the problem is to simply disable spanning tree on the switch. This is the wrong solution.

    The right solution is to configure a feature called PortFast on Cisco switches. (Most switch vendors have a similar feature.) You configure the command “spanning-tree portfast” on all the ports connecting to end devices like workstations. They then automatically bypass the wait period and DHCP works properly.

    It’s important to only configure this command on ports that connect to end devices though. Ports connecting to other switches need to exchange spanning tree information.
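    On a Cisco IOS switch, the access-port setup described above might look like the sketch below. The interface range is a placeholder — point it at whichever ports actually face end devices on your hardware.

    ```
    ! Hypothetical example: ports Gi1/0/1-24 connect to workstations, not switches
    interface range GigabitEthernet1/0/1 - 24
     spanning-tree portfast
     ! Optional safety net: error-disable the port if a switch BPDU ever arrives,
     ! protecting against someone plugging a switch into an access port
     spanning-tree bpduguard enable
    ```

    BPDU Guard is optional but pairs naturally with PortFast, since a PortFast-enabled port should never see another switch.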

  2. Letting the network pick your root bridge

    As the name suggests, spanning tree resolves loops in your network by creating a logical tree structure between the switches. One switch becomes the root of the tree, and is called the root bridge. All other switches then figure out the best path to get to the root bridge.

    If there are multiple paths, then on each switch, spanning tree selects the best path and puts all the other ports into a blocking state. In this way, there’s a single path between any two devices on the network, although it might be rather circuitous.

    Every switch taking part in spanning tree has a bridge priority. The switch with the lowest priority becomes the root bridge. If there’s a tie, then the switch with the lowest bridge ID number wins. The ID number is typically derived from a MAC address on the switch.

    The problem is that, by default, every switch has the same priority value (32768). So if you don’t manually configure a better (lower) bridge priority value on a particular switch, the network will simply select a root for you. Then Murphy’s Law applies. The resulting root bridge could be some tiny edge switch with slow uplinks and limited backplane resources.

    To make matters worse, a bad choice of root bridge can make the network less stable. If there’s a connectivity problem that takes any random switch off the network, spanning tree heals rather quickly. But if the root bridge goes down, or if the failure means that some switches no longer have a path to the root bridge, this constitutes a major topology change. A new root bridge needs to be selected. The entire network will freeze during this time and no packets can be forwarded.

    I always recommend making the core switch the root bridge. I also like to select a backup root bridge. If there are dual redundant core switches, then one is the root bridge and the other becomes my backup.

    Set the bridge priority on the primary root bridge to the best possible value—4096—and the backup root bridge to the next best value—8192. Why these funny numbers? Well, that’s a longer story that we don’t have space for here, but the lower order bits in the priority field have another purpose, so they aren’t available for use as priorities.
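    On Cisco IOS switches, these priorities are set per VLAN. A minimal sketch, assuming the VLAN range covers everything in use on your network:

    ```
    ! On the primary core switch (intended root bridge)
    spanning-tree vlan 1-4094 priority 4096

    ! On the backup core switch
    spanning-tree vlan 1-4094 priority 8192
    ```

    The priority must be a multiple of 4096, which is exactly the quirk described above: the low-order bits of the field are reserved for another purpose.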

    Spanning tree saves you from loops / Photo: darkday on Flickr

  3. Using legacy 802.1D

    The first open standard for spanning tree is called 802.1D. It’s one of the earliest standards in the IEEE 802 series of standards that includes the specifications for every type of Ethernet and Wi-Fi as well as a bunch of other protocols. It works well despite its age, and you’ll find this type of spanning tree on just about every switch. Any switch that doesn’t support 802.1D is only useful in small isolated environments, and should never be connected to any other switches.

    But there have been several important advancements to spanning tree since 802.1D. These improvements allow sub-second convergence following a link failure, the ability to scale to larger networks, and the ability to run different spanning tree topologies with different root bridges for different VLANs. So it makes a whole lot of sense to use them.

    Most modern Cisco switches default to a protocol called Per-VLAN RSTP. This stands for Rapid Spanning Tree Protocol. It automatically operates a separate spanning tree domain with a separate root bridge on every VLAN. In practice, it’s common to make the same switch the root bridge on all or most of the VLANs, though.

    The rapid feature of RSTP is what you’ll probably find most useful. It allows the network to recover from most failures in times on the order of 1 to 2 seconds. Multiple Instance Spanning Tree, or MST, is similar to RSTP. The main difference is that you can designate groups of VLANs that are all part of the same tree structure with a single common root bridge. However, I recommend using Per-VLAN RSTP in most cases because it’s easier to configure. Also, I’ve encountered some interoperability problems with MSTP between different switch vendors.
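    On a Cisco IOS switch, moving to Per-VLAN RSTP is a one-line change (verify the exact mode keyword on your platform):

    ```
    ! Switch from legacy PVST+ to Per-VLAN Rapid Spanning Tree
    spanning-tree mode rapid-pvst
    ```

    The change is briefly disruptive while the topology reconverges, so make it in a maintenance window.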

  4. Mixing spanning tree types

    It should be pretty clear from the descriptions of 802.1D, RSTP, and MST in the previous section that mixing them could get messy. The RSTP and MST protocols have rules for how to deal with this mixing, and in general it involves creating separate zones within the network for groups of switches running different flavours of spanning tree. This rarely results in the most efficient paths being selected between devices.

    The only really valid reason to mix spanning tree types is to allow the inclusion of legacy equipment that doesn’t support the more modern protocols. As time goes by, there should be fewer and fewer of these legacy devices, and the number of places where it makes sense to mix the protocols should become smaller.

    I recommend picking one, preferably RSTP or MST, and just using that in a consistent manner across all of your switches.

    Photo: olle svensson on Flickr

  5. Using MST with pruned trunks

    Because MST allows a single spanning tree structure that supports multiple VLANs, you need to be extremely careful about your inter-switch trunks.

    I once had a client with a large complicated network involving many switches and many VLANs. They were running MST. For simplicity, they had designated a single MST instance, meaning that all VLANs were controlled by the same root bridge.

    The problem for this client arose when they decided that certain VLANs should only exist on certain switches for security reasons. All perfectly reasonable. So they removed the VLAN from the main inter-switch trunks, and added new special trunks just for these secure VLANs. And everything broke.

    MST considered all VLANs to be part of the same tree, and it selected which trunks to block and which to forward based on that assumption. But in this case, because some VLANs were only present on some trunks and other VLANs were present on the other trunks, blocking a trunk meant only passing some of the VLANs. Blocking the other trunk meant only passing the other set of VLANs. For the blocked VLANs there was simply no path to the root bridge at all.

    So, if you’re going to use MST, you need to either ensure that all VLANs are passed on all trunks, or you need to carefully and manually create different MST instances for each group of VLANs with special topological requirements. In other words, you have to do careful analysis and design the network properly. Or you could take the easy way out and run Per-VLAN RSTP.
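    If you do go the MST route, the instance-to-VLAN mapping is explicit configuration. A hypothetical Cisco IOS sketch with two VLAN groups — the region name, revision number, and VLAN numbers here are made up, and all three must match exactly on every switch in the MST region:

    ```
    spanning-tree mode mst
    spanning-tree mst configuration
     name CAMPUS
     revision 1
     ! VLANs carried on the main inter-switch trunks
     instance 1 vlan 10,20,30
     ! Secure VLANs carried only on their dedicated trunks
     instance 2 vlan 100,110
    ```

    Each instance gets its own root bridge election, which is what lets the secure VLANs follow a different topology than the rest.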

  6. Conflicting root bridge and HSRP/VRRP

    Another common topological problem with spanning tree networks involves the way that Layer 2 and 3 redundancy mechanisms sometimes interact.

    Suppose I have a network core consisting of two Layer 3 switches. On each segment I want these core switches to act as redundant default gateways. And I want to connect all of the downstream switches redundantly to both core switches and make spanning tree remove the loops.

    In this scenario, the spanning tree root bridge for a particular VLAN might be on one of these core switches and HSRP/VRRP master default gateway on the other switch. Then an Ethernet frame originating on one of the downstream switches destined to the default gateway will need to take an extra hop, going first to the root bridge, and then to the secondary core switch that currently owns the default gateway IP.

    Normally this isn’t a problem, but imagine that I’m passing packets between two VLANs, both with Core Switch A as the root bridge and Core Switch B as the default gateway. Every packet must go up to Core Switch A, and cross the backbone link to get routed on Core Switch B.

    Then it has to cross the backbone link again to go back to Core Switch A to be delivered to its destination. All of the return packets must also cross the backbone link twice. This creates a massive traffic burden on the backbone link where every packet in both directions must cross twice. It also incurs a latency penalty as every packet needs to be serialized and transmitted twice. Even on 10Gbps links, this will typically cost a couple of microseconds in both directions, which could add up for particularly sensitive applications.

    Suppose instead that the default gateway was on the same switch as the root bridge. Now the packet goes up to the root bridge, Core Switch A, and gets routed between the VLANs and immediately switched out to the downstream device. It doesn’t cross the backbone at all in either direction.
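    Aligning the two is straightforward: on the switch you’ve chosen as root bridge for a VLAN, also give it the winning HSRP priority for that VLAN. A hypothetical Cisco IOS sketch for Core Switch A — the addresses and group number are examples only:

    ```
    ! Make Core Switch A the spanning tree root for VLAN 10...
    spanning-tree vlan 10 priority 4096

    ! ...and also the active HSRP default gateway for VLAN 10
    interface Vlan10
     ip address 10.0.10.2 255.255.255.0
     standby 10 ip 10.0.10.1
     standby 10 priority 110
     standby 10 preempt
    ```

    Core Switch B would carry the backup values (priority 8192 for spanning tree, a lower HSRP priority) so both roles fail over together.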

Spanning tree is a terrifically important protocol. It allows us to build redundancy into inter-switch connections. It saves us from catastrophic loops when somebody accidentally connects things they shouldn’t.

It’s true spanning tree can be misconfigured with bad consequences, but this possibility shouldn’t discourage you from using it. The solution is to be careful and deliberate about your network design.

The post 6 Common Spanning Tree Mistakes and How to Avoid Them appeared first on Auvik Networks.

The 5 Levels of MSP Operational Maturity


Paul Dippell is the co-founder and CEO of Service Leadership, Inc., a company that measures IT and managed service provider (MSP) performance across the industry and annually publishes the results as the Service Leadership Index®. In this interview, we ask about his work in benchmarking managed service providers.

What is operational maturity for an MSP?

At Service Leadership, we define five levels of operational maturity specifically for solution providers.

Operational Maturity Level© 1 – Beginning
Low to negative financial performance and inconsistent service quality. They don’t know what they don’t know. Operations are largely trial and error.

Operational Maturity Level 2 – Emerging
Low to negative financial performance and inconsistent service quality but starting to understand the basics of profit levers. Few controls and little forward planning. Incentive compensation isn’t meaningful and/or is poorly aligned.

Operational Maturity Level 3 – Scaling
Median financial performance and service quality. Basic controls. Some forward budget planning, little attainment tracking. Incentive compensation is meaningful in scope but not tied to budget attainment.

Operational Maturity Level 4 – Optimizing
High financial and service quality performance. Robust controls. Detailed forward budgeting and attainment tracking. Incentive compensation is meaningful in scope and tied to budget attainment.

Operational Maturity Level 5 – Innovating
Highest financial performance and highest value and quality services. The characteristics are similar to OML 4 but they also now extend capabilities to lines of business adjacent to IT.

What’s the correlation between Operational Maturity Level and financial performance?

There’s a strong positive correlation.

The Service Leadership Index reports that solution providers at a higher OML consistently deliver EBITDA percentage (earnings before interest, taxes, depreciation and amortization) about three times higher than those with a median OML. Meanwhile, the firms with the lowest levels of maturity regularly operate at zero profit or below — at least until they either improve towards median, cease to exist, or sell.

According to our Service Leadership Index, solution providers at OML 1 or 2 were delivering EBITDA of 0.6% or lower in Q2 2016. This is after adjusting to ensure fair market owner compensation is being taken from the income statement and not balance sheet.

MSPs at OML 3 are most often delivering at or near median financial performance, which in Q2 2016 was 9.6% adjusted EBITDA.

MSPs at OML 4 or 5 are most often delivering top financial performance, which in Q2 2016 was at or above 18.7% adjusted EBITDA. The top quartile has been at about this level for the last eight years.

By the way, it’s common to assume that larger solution providers must be higher in OML. After all, it takes more management skill to grow and manage a bigger company, right? The answer is, OML has nothing to do with company size.

We know this because it’s entirely possible, and sadly not uncommon, to find large solution providers — up to billions in revenue — who don’t generate profit and who struggle to consistently deliver quality services. Conversely, it’s possible to find a $1 million solution provider who operates at OML 4 or 5 and gets very good results. And both can be found in between those sizes as well.

What does all that mean for MSPs?

Clearly, the management teams of the top performers are doing things more effectively than those of the median performers, who in turn are doing things more effectively than the bottom performers.

The things each management team is doing are the same across all these firms, but the lower-profitability, lower-growth firms are doing them in less efficient and effective ways.

By segmenting all the areas of management skill into highly granular, incrementally improving steps, we can tell a management team what their current level of operational maturity is, and more importantly, exactly what they must do in each area to operate more like the top performers and get closer to their desired results.

How do you know what those steps are?

OMLs are made up of traits — things that every MSP must do to have at least a basic, reliable business model. And they are the things you must do very well to have a top business model.

There are 30 to 39 OML traits, depending on your Predominant Business Model. Some sample assessment questions to uncover these traits include:

  • How is time tracked?
  • How is pricing determined?
  • How are products, solutions and services delivered and how are accounts managed?
  • How are people hired, on-boarded and terminated?
  • How are new offerings developed, approved, rolled out, and driven to success?
  • How are vendor relationships managed?
  • How is financial management performed?

We ask these questions of low- and high-performing solution providers around the world, so we know how firms of differing financial performance answer them. We also know exactly what needs to change to move an MSP to the next OML.

How did you go about developing the five OMLs?

My co-founder and I come from solution provider backgrounds. We had built four large solution provider businesses in a row, including MSPs, and were looking for ways to drive higher financial results. Each of the four companies was a different size and targeted different markets, so what worked well for one didn’t necessarily achieve the same results in the other.

We discovered there’s no school for solution provider owners on how to run the business, or how to add new solutions or services, or how to evolve from a current business model to a new one. To the extent that such guidance existed, it was generally very tactical — as in certification training and best practices for starting new solution practices — and missed the foundational aspects needed to make building a company as safe and profitable as possible.

We were looking for baseline practices that would help all of our companies perform at a high level even though they were different. We found those and they enabled us to drive our four companies to $130 million, $2 billion, $400 million, and $60 million in revenues respectively. Then we started Service Leadership to help others achieve similar success.

Before we developed the Service Leadership Index for solution providers, no other benchmarking method existed that was specific to the industry. Now MSPs can have the same quality and depth of benchmarking we had in our own companies, except they’re compared to the best-in-class across the whole industry.

We first built the OML approach to enable our own management teams to best perform. When you run a solution provider business with 9, 44, 14, or 18 locations across the country, as we have, you can’t afford to have a branch that doesn’t perform. You also can’t afford to have a location general manager say, “That solution or service or best practice doesn’t work here,” because you’ll soon have chaos.

So we developed the OML approach to understand each location’s management team skill and capability level, and to guide them step by step to higher performance. Now we apply it to independent solution providers.


The terms and concepts Service Leadership Index® (S-L Index™), Predominant Business Model© (PBM©) and Operational Maturity Level© (OML©) are proprietary to Service Leadership, Inc.  All rights reserved.

The post The 5 Levels of MSP Operational Maturity appeared first on Auvik Networks.

Secrets of the MSP Pros: Jason Caras


In this ongoing series, we profile MSP (managed service provider) executives by asking them questions big and small. Have a question you think we should ask? Want to be profiled next? Let us know at blog [at] auvik [dot] com.

Jason Caras, Co-CEO, IT Authorities Inc.

Jason Caras
Co-CEO, IT Authorities Inc.


Year IT Authorities was founded
2006

Number of employees
92

Percentage of business that’s managed services
70%
Your favorite conference and why
The Inc. 5000 because those are the companies that are making the biggest difference. You get to rub elbows with other companies that are growing quickly and contributing to the economy. They’re all rainmakers.


Where you get your industry news
Google

Your regular coffee order
Large dark with cream


What’s on your smartphone home screen
My two children
How you run a meeting
On time, to start. Then I like to empower others to shine. I don’t like the meetings to be about me, but about the other folks there.


A typical work day for you
Couple of hours with my kids in the morning. Drop off my daughter at school. Grab breakfast at the gym. Get into the office around 9. Usually in meetings from there until about 3. Work out at 4. Back home for dinner with my kids.


Someone in computer networking whom you admire
My business partner, Jason Pollner. He’s super intelligent, very measured, patient, very detailed and super organized.


The one trait you look for in new hires
Coachability. Well, I look for culture fit first. Is that a trait of theirs – no? So a trait of theirs is that they’re coachable.
Your best personal productivity tip
Having tools to hold yourself accountable as to whether you did what you said you would do. I use an app on my iPhone called Streaks. It gives you visual clarity as to what you did or did not do. That’s what life boils down to – you either did it or you didn’t.

Peer or business networking groups you belong to
CEO Council of Tampa, Leadership Tampa Bay, Tampa Bay Professional Alliance, Business Breakfast Group 


What you’d be doing if you weren’t an MSP
Public speaker and author. I do a lot of speaking now, particularly talking to kids in detention centers and the juvenile justice system on how to transform their lives.


A geeky secret you have
I don’t know. I love gadgets — that’s about it.

Podcasts — yes or no?
Personally, no. I think they’re great but I don’t personally listen to them. I listen to audio books — mainly on leadership and personal development.


What you wish all software vendors knew
Our customers expect consistency.


Best piece of conference swag you ever received
I don’t go to a lot of conferences so I don’t know.

A book you recommend for MSPs
I haven’t been impressed with anything specifically for MSPs that I’ve read over the years. Traditionally the MSP market has been focused on small MSPs catering to small businesses. That’s not necessarily our market. I find there’s not a lot out there about how to swim upstream and establish a higher level of enterprise business. On general business topics, I’m a big fan of John Maxwell on leadership.


The Terminator movies — science fiction or the future?
Definitely the future. We’re seeing it now. It’s crazy.


IT Asset Disposition: An MSP Opportunity That’s Anything But Trash


Computing power doubles roughly every couple of years, according to Moore’s Law. That’s good news for companies and their people—continually faster and smarter gear can power higher productivity.

But there’s also a downside to Moore’s Law: devices go obsolete quickly. The average server, for example, is used for only two to four years before it’s replaced. And network switches often last just three to five years before breakdowns become a common concern.

Those short lifecycles can spell big IT headaches for businesses. Outdated devices are more likely to create security vulnerabilities in the network and can significantly increase the chance of network outages.

You know this — managing old gear is the bane of many an MSP’s existence. But why not turn that problem into a revenue stream by offering end-of-life services, also known as IT asset disposition, to your clients?

Properly disposing of equipment isn’t easy for your clients

To start, a lot of companies may not even have a good idea of what’s on their network, let alone what gear might be past its prime.

If they can identify risky equipment, removing it safely from the network and properly disposing of it can be well beyond their capabilities. After all, you can’t just unplug a server and throw it in the trash.

Ensuring smooth device shutdown and the introduction of new equipment in a way that avoids business and network disruption can be tricky.

What’s more, computer hardware contains pollutants that pose environmental hazards, which means it needs to be taken to special recycling centers.

And devices may also contain sensitive data, which could potentially be recovered even if it’s been “erased” from disks. In fact, a recent study shows that more than half of IT pros don’t erase data properly, leaving their companies at risk of privacy and compliance breaches.

For all these reasons, your clients need help with their IT asset disposition (ITAD).

The IT asset disposition opportunity

A recent survey by Cisco revealed that a whopping 73% of companies are using “vulnerable, end-of-life networking equipment” that they should be looking at replacing. Anecdotally, you no doubt have client stories that support those findings. So the market for ITAD services is definitely rich.

Your clients could go out and hire a specialist firm that does ITAD and nothing else. But you’ve got a few advantages over that option.

One, you have an established relationship with your clients. If they work with you to dispose of old equipment, they don’t have to find and vet a new company, and they don’t need to pay a separate bill.

Two, you can position your offering as a complete lifecycle service, from equipment purchase all the way through to disposal. It’s a tidy package that offers convenience to your client, and helps you capture a greater share of the client’s wallet.

Three, you already know your client’s network. You have an inventory of their gear and can easily track and flag equipment that’s coming to the end of its useful life. You can do this type of auditing constantly and proactively, something an ITAD firm can’t do. And you can provide enough advance warning that new expenditures can be worked into future budgets.

Even if you don’t want to handle the nitty-gritty of hard drive degaussing and equipment disposal, you can partner with an ITAD company and white label their services. You and your clients still get all the benefits listed above.

“Channel partners that have made a living installing and managing IT gear can now help customers get rid of it as well,” writes TechTarget in an excellent article on the opportunities to be found in ITAD. (The article also mentions several ITAD firms that are open to partnering with MSPs.)

It’s a service line worth considering.


Implementing ACLs on Cisco ASA Firewalls


The first line of defense in a network is the access control list (ACL) on the edge firewall. Some vendors call these firewall rules or rule sets or something similar. To keep the discussion focused, this post will look only at the Cisco ASA, but many of the ideas are applicable to just about every device on the market.

Cisco uses ACLs for many other purposes besides controlling access. ACLs can define which routes will be distributed over a routing protocol. They can control quality of service (QoS) rules and other policies as well. But for this article I just want to talk about the ACLs that filter traffic flowing into, through, and out of the firewall. Just about every firewall implementation will need this kind of ACL.

The challenge is that while these ACLs can be fairly simple in concept, they quickly become large and unwieldy if they aren’t carefully organized and managed.

5 general rules for building ACLs

I use several general rules when building and applying ACLs to interfaces on an ASA firewall. Note that this is simply how I do it. They aren’t hard rules, but they are based on many years of experience and a lot of mistakes and places where I painted myself into a difficult corner.

My goal is always to make the intent of the configuration as clear as possible, and to make it easy to maintain and update the firewall over time.

I always construct my rules using the command-line interface (CLI). I don’t like using the Cisco ASDM web interface to configure ASA firewalls because I don’t find it makes anything easier to understand or faster to deploy.

When I first started using ASA firewalls, I did use ASDM. But most of us who work with these systems regularly find the CLI easier in the long run.

One of the biggest differences is that the ASDM interface automatically creates a lot of object groups that wind up having arbitrary and meaningless names. This alone I consider to be so unhelpful that I always encourage administrators to build their ACLs from the CLI.

The first rule is to always apply ACLs inbound on all interfaces. Every interface should have an ACL, even if it’s a trivial single line.

I don’t like to apply ACLs outbound on the interfaces because I want to use the firewall’s internal compute and memory resources as efficiently as possible. I don’t want to receive a packet, perform address translation and application inspection, scan it against IDS rules, and then put it in the outbound buffer, only to drop it. That’s rather inefficient. Since firewalls are often performance bottlenecks in networks, I would prefer to apply those resources more carefully.

This is true for the modern Cisco ASA 5500x devices that have the internal SourceFire IDS/IPS devices, which are the focus of this particular article, but also true for essentially every next-generation firewall on the market. If we know we’ll never accept a packet from North Korea, let’s just drop it and not bother to inspect it any further.

There are exceptions, of course. Cisco wouldn’t have included the ability to apply outbound ACLs if they weren’t needed. But I would consider instances where outbound rules are needed as corner cases that can often be avoided if you design your rule sets more carefully.

The second rule is to name the ACL after the interface on which the rule will be applied. I like to name my main interfaces inside and outside. If there are additional interfaces such as DMZs, I try to give them clear and simple names as well, like dmz.

The ACL applied to the inbound path on my inside interface will be inside_in. The one on the outside interface will be outside_in. If I had an outbound ACL, it might be called outside_out.

Normally I modify these ACLs incrementally, simply adding or deleting individual lines one at a time. However, if I need to do a wholesale change, I generally keep the old ACL and create a new one, but add a simple revision number to the name, such as inside_in_2. This way, it’s very easy to roll back the change to the old version if there’s a problem.
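To make that concrete, here’s a sketch of cutting over to a revised ACL (the inside_in_2 rules are placeholders, not a real policy):

!
access-list inside_in_2 remark Revised inside policy - placeholder rule
access-list inside_in_2 extended permit tcp any any eq https
!
access-group inside_in_2 in interface inside

Because the old inside_in ACL is still sitting in the configuration, rolling back is a single command: “access-group inside_in in interface inside”.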

The third rule is to use remarks in your ACLs to internally document your intentions. The more you can make the configuration of your firewall self-documenting, the easier it will be to manage it going forward. (I’ll show some specific examples of remark lines a little later).

The fourth rule is to use object-groups. An object-group is a convenient way of organizing things like IP addresses or protocols. Using object-groups allows you to create an access rule for one group of hosts to access another group of hosts over a common set of protocols with a single command, as long as you’ve already defined those groupings. The other big advantage of object-groups is that you can re-use them.

For example, I might want to block a particular set of malicious IP addresses from ever accessing my network from the outside. If I use the same object-group on the inside interface, I can also prevent anybody inside my network from ever accessing these same malicious external hosts. And if I add a new host to that object-group, I automatically update both those inbound and outbound rules.

The other important thing to mention about object-groups is that they should have meaningful names. If you have an object-group for standard web protocols like HTTP and HTTPS, give it a name like WebProtocols, making it obvious what it is and how you plan to use it.
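As a sketch, a protocol group like that might look as follows (the contents are illustrative; add whatever ports you actually use):

!
object-group service WebProtocols tcp
 description Standard web protocols
 port-object eq http
 port-object eq https
!

Referencing WebProtocols in an access rule then reads almost like plain English, and adding a port later automatically updates every rule that uses the group.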

And finally, the fifth rule is that it’s really important to make your ACL as specific as possible. Don’t permit “any” hosts if you can narrow it down. Make those “permit” rules as specific as possible. The same goes for protocols. Don’t permit all IP protocols if you really mean a particular protocol. Firewalls are security devices, so don’t undermine your security.

Overall ACL structure

ACLs are executed sequentially for each new session. If there’s a match, either permitting or denying the packet, the firewall stops checking. So order matters. I’ve seen many cases where certain lines in an ACL never have an effect because an earlier rule overrides them.

I like to build my ACLs in a structured way. First, I include a relatively small and very specific whitelist. It includes things that I know are always allowed, and overrides any blacklist rules that might come later.

For example, I might want to always permit a VPN tunnel from the static IP address of a remote office. At one client site, I maintained a regularly updated rule containing the IP address of the hotel where a senior executive was staying that week, and removed it when they returned to the office.

Most of the time, I also include a general blacklist, then all of the specific rules.

Start each section with a remark explaining what it is. Each individual rule should also have a separate remark. If the client has a good change control system, I like to also include the ticket number and initials of the engineer making the change in each of the individual rule remark lines.

The following configuration fragment shows a very simple example, with only a single rule in each section.

!
access-list outside_in remark Section 1 - Specific whitelist 
access-list outside_in remark Temporary exception - #50662 - 2016-10-20 - KD
access-list outside_in extended permit tcp object-group SPECIAL_DEVICES any eq http
access-list outside_in remark Section 2 - General blacklist 
access-list outside_in remark Suspicious Ranges - #11246 - 2015-11-05 - KD
access-list outside_in extended deny ip object-group SuspiciousRanges any
access-list outside_in remark Section 3 - General whitelist 
access-list outside_in remark web servers - #24548 - 2016-08-19 - KD
access-list outside_in extended permit tcp any object-group WebServers object-group WebProtocols
access-list outside_in remark Section 4 - Specific rules
access-list outside_in remark mail relay - #10456 - 2015-07-29 - KD
access-list outside_in extended permit tcp object-group MailRelay object-group MailServer object-group MailProtocols
!
access-group outside_in in interface outside

That last “access-group” command is what we use to apply this particular ACL to the interface. In this case, it’s applied inbound to the interface named outside.

The blacklist

Now let’s look at the sections in more detail. The general blacklist is usually a list of sites or IP address ranges representing geographic regions that I will never accept anything from. For example, if the organization never expects legitimate traffic from a residential broadband ISP in suburban Moscow or sub-Saharan Africa, block it. And if you’re being attacked from a web hosting service in Romania, block it. I often do this all in a single object-group that I call SuspiciousRanges.

Note that this is also why I put my specific whitelist and temporary exceptions above the general blacklist. If there’s a legitimate customer in Lagos, Nigeria, or an executive who likes to holiday in Moscow, I can put these specific addresses into my whitelist without weakening my blacklist.

Then one of my daily activities is to monitor the firewall logs for people attacking my network, and to add their IPs to this object group. I always look up the IPs against DNS and see what they are. If the source is an ISP in a country where I never expect to see legitimate traffic, then I might block the whole allocated range for that ISP. Otherwise, I might just block the single host that’s attacking me.

!
object-group network SuspiciousRanges
 description Hosts and networks to be blocked
 network-object 175.45.176.0 255.255.252.0
 network-object host 192.168.254.254

The above example object-group has only two useful lines. This particular object-group will generally grow over time to be extremely large.

I like to keep a spreadsheet just for this rule, explaining why I’ve included each line. If it was an attack, then I include the date of the attack and the specific IP addresses that originated the attack. This way, if there’s ever a complaint from a user saying that a legitimate host has been blocked, I can look back and see whether that particular host is collateral from an overly general rule. This can happen in particular with large web hosting services, where IP addresses might be re-used, or where there are many customers and only some of them are malicious.

Counters and statistics

One of the most useful but neglected features of Cisco ASA ACLs is the statistical data provided by the “show access-list” command. This command conveniently provides a counter of the number of times each rule was matched.

And, in the case of object-group lines, which could include hundreds or thousands of individual entries, it breaks out the hit counts for each individual entry. These counts give you an instant way of telling whether a particular rule is being used.

Such information can be helpful when trying to troubleshoot why a particular traffic pattern is either passing or being blocked incorrectly. Suppose, for example, that you’re trying to allow a particular external network to access some sensitive DMZ server, but it isn’t working. To accommodate the access, you probably added a narrowly defined “permit” command to your outside_in ACL. Use the “show access-list outside_in” command and find the line you created for this purpose. If it has a hit count of zero, then you know some other command higher up in the ACL is blocking your special access.

The other thing I often use the counters in the “show access-list” output for is to see whether specific lines are being used at all. If they’re never used, they might be unnecessary, making them candidates for removal next time I’m cleaning up my firewall configuration.

It’s also useful to remember that every ACL ends with an implicit “deny all” rule. This means that anything not explicitly allowed will be rejected. However, it’s often convenient to make this explicit and end the ACL with a “deny ip any any” rule.

The advantage to doing so is that the “show access-list” command will give you statistics on how often packets are being rejected by falling off the end of the ACL. If I’m doing formal statistical analysis, then I can add up all of the individual counters and get reliable percentages of how many packets are being rejected or accepted by each individual command.
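For example, with an explicit deny at the end, the output might look something like this (line numbers and hit counts are purely illustrative):

ciscoasa# show access-list outside_in
access-list outside_in line 3 extended permit tcp any object-group WebServers object-group WebProtocols (hitcnt=18204)
access-list outside_in line 7 extended permit tcp object-group MailRelay object-group MailServer object-group MailProtocols (hitcnt=0)
access-list outside_in line 9 extended deny ip any any (hitcnt=1871)

The zero hit count on line 7 would tell you the mail relay rule has never matched: either it isn’t needed, or something above it is catching the traffic first.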

A few additional thoughts

One of the more interesting features of these ACLs is the ability to use Fully Qualified Domain Names (FQDN). It’s a relatively new feature, which I presume Cisco added because every other firewall vendor had such a feature. However, I have to say I don’t really like to use DNS-based rules in my firewall ACLs.

Here’s my worry. Suppose somebody knew that I trusted, say, www.auvik.com by name in my firewall. They could launch an attack in which they hit my firewall with bogus DNS packets and trick it into accepting packets from some other network.

There’s a workaround to this problem, though. If you use FQDN-based ACL entries, you can (and should) enable the “dns-guard” feature on your firewall. It’s an inspection rule that validates DNS responses.
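As a rough sketch, the pieces involved look like this (addresses and object names are placeholders; verify the exact syntax against your ASA software version):

!
dns domain-lookup outside
dns server-group DefaultDNS
 name-server 198.51.100.53
!
dns-guard
!
object network TRUSTED_WEB
 fqdn www.auvik.com
!
access-list inside_in extended permit tcp any object TRUSTED_WEB eq https
!

The firewall periodically resolves the FQDN and updates the rule, while dns-guard tears down the DNS connection state after the first response, making spoofed answers harder to slip in.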

Another thing to consider when building ACLs is that they’re static and based purely on Layer 3 and 4 features like IP addresses and port numbers. This doesn’t really help you if you’re concerned about more sophisticated attacks.

For example, if I use an ACL to allow HTTP access to my web server from the Internet, the firewall passes all of those HTTP packets. In fact, it will pass anything that has the right TCP port information, regardless of whether it’s actually HTTP. And it will completely miss more sophisticated attacks like SQL injections, cross-site scripting, and so forth. So I like to think of my firewall ACL as simply a first order traffic filter to give the intrusion detection/prevention system (IDS/IPS) a slightly easier job.

Cisco’s current generation of ASA 5500x firewalls include the option of running SourceFire IDS/IPS software on a virtual machine inside the firewall. Or you could use some other vendor’s IDS/IPS. The important thing is to remember that you aren’t secure just because you’ve built a rock solid ACL.


Threat or Gold Mine? What Lies Ahead for Managed Services


Our world is in the midst of being redefined. There’s a revolution afoot. A disruption that will affect every business in every industry.

It’s coming for the auto, trucking, and taxi industries.

It’s coming for the energy sector.

And it’s coming for managed services, says ConnectWise CEO Arnie Bellini.

The third major age of computing

I recently had the pleasure of hearing Arnie keynote an HTG event for service desk executives. He shared his views on the three major ages in the history of modern computing.

First was the age of smart businesses. Starting in the 1960s, computers became affordable enough that a few thousand businesses could buy them.


Second was the age of smart people. It started with the boom in personal computers and has continued through smartphones and tablets.


Now we’re at the start of the age of smart things. Anything and everything will be aware and online.


Chuck Robbins, CEO of Cisco, describes the shift like this:

“We live in a world where everything is going to be connected. It’s happening. And … we’ve seen this movie before. We’ve connected disparate systems, we’ve connected disparate technologies, disparate protocols, and we’ve created greater value for our customers as a result of all of those connections and the value that we can create above it. And what’s going to happen now is it’s going to happen across every industry at once. Every industry is going to happen at the same time. So we’re literally going to go through the biggest technology transformation that we’ve seen in our lifetime.”

Arnie related the tale of his dad, a salesman for IBM in the ‘50s and ‘60s. When personal computers started emerging, IBM’s official take was that there would never be much of a market for them.

That blindness to the change that was occurring left the door open for Microsoft and Apple to rise to prominence and steal IBM’s lunch.

Now, MSPs are in the midst of a similar change. Arnie said you can ignore it — like his dad and the others at IBM did until it was too late — or you can adapt.

Opportunity is calling

A look at the basic facts shows you the opportunity that exists.

  • The number of devices is skyrocketing
  • The number of apps, both broad and niche, is skyrocketing
  • The number of things being interconnected is skyrocketing
  • The amount of data being generated is truly staggering — and skyrocketing ever higher


Your clients need you

The result is a bewildering set of choices and processes facing every business.

First, what combination and configuration of devices, apps, interconnections, and data will help them achieve a particular business goal? Multiply that by several goals.

Once they know what they want to set up, how will those devices, apps, and data be set up, configured, and connected? Who will do it?

And after a system is up and running, how will it be maintained? The technology of the future will break down, wear out, overload, overheat, throw errors, and just plain not work in the same way our current technology does. Somebody has to know how to fix it.

“If that doesn’t scream opportunity to you, nothing else will,” said Arnie. “The concept of trusted advisor is more relevant than ever and will continue to become more important as technology becomes all pervasive.”

The future of managed services: It’s do or die

The age of smart things has already begun and is advancing rapidly. Do you have five years to adapt your service provider business? Probably. Ten years? Almost certainly not.

David Cheriton, a computer science professor at Stanford, put it this way at a recent ONUG conference: “Automation or annihilation: Your choice!”

David Cheriton slide, ONUG16 talk

Photo: Alissa Irei on Twitter

Robbins says:

“This [revolution] is going to force all of us to move at a pace that is uncomfortable. And we’re going to have to make decisions sometimes when we have about 80 percent of the information that we’d like to have, and then we’re going to have to adjust. And if you wait until you have every ounce of information that you need to make a decision because you’re afraid to make a mistake, you will die.”

Recent CompTIA research indicates MSPs still have a lot of work to do to take advantage of new technologies — but the prospects are rosy.

“Many jobs were threatened by the technical advances of the Industrial Revolution, and those specific job categories saw reductions,” says Seth Robinson, CompTIA’s senior director of technology analysis. “However, entirely new categories were created, leading to a net gain in jobs and overall growth.”

“This is the shift that will dominate the future of IT services,” said Arnie. “It’s a gold mine, not a threat.”

His advice? You don’t need to radically transform your business right now. But start to pivot by adding new services and practice areas. Take your clients to the cloud. (“If you don’t, someone else will.”) Look into security services, business intelligence, automation, and bots.

And always remember the bedrock of managed services: great service. That’s one thing that will never change.



The 4 Service Margin Levers an MSP Can Control


Hear Rex Frank explain the 4 service margin levers in our Frankly MSP podcast interview.


Savvy service executives know that what matters at the end of each month, quarter, and year is gross service margin. We measure margin in both gross margin dollars and gross margin percent.

Here’s how you calculate each figure:

Gross margin dollar = (all service revenue – (all service hard costs + all service labor))

Gross margin percent = (gross margin dollars / all service revenue) * 100

Let’s say our monthly service revenue is $110,000. Our hard costs for tools such as endpoint RMM, network RMM, antivirus, and so on are $10,000. And our cost to pay the engineers, dispatcher, and service executive is $33,000.

Here’s how that looks using the formulas:

Gross margin dollars: ($110,000 – (10,000+33,000)) = $67,000

Gross margin percent = ($67,000/110,000) * 100 = 61%
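The same arithmetic as a minimal Python sketch (the function and variable names are mine, not Sea-Level’s):

```python
def gross_margin(service_revenue, hard_costs, labor_costs):
    """Return (gross margin dollars, gross margin percent)."""
    dollars = service_revenue - (hard_costs + labor_costs)
    percent = dollars / service_revenue * 100
    return dollars, percent

# The worked example above: $110k revenue, $10k tool costs, $33k labor
dollars, percent = gross_margin(110_000, 10_000, 33_000)
print(dollars, round(percent))  # 67000 61
```

Run against the numbers above, it returns $67,000, which rounds to a 61% gross margin.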

Most CFOs would agree that a service business consistently producing a 61% gross service margin is doing pretty well.

The 4 levers within a service desk manager’s control

There are four elements—I call them levers—that a service desk manager can adjust to directly affect gross service margin.

Lever 1: Utilization
Lever 2: Billing rate
Lever 3: Engineer pay rate
Lever 4: Agreement efficiency ratio

Most of us are accustomed to working with the first three levers. For example:

  • If you keep engineer pay rate and billing rate in the same place, and move utilization up, gross margin goes up.
  • If you keep billing rate and utilization stationary, and move engineer pay up, margin goes down.
  • If you keep pay rate and utilization stationary, and increase your billing rate, margin goes up.

Lever 1: Utilization

Here’s a table that shows typical non-working and non-billable hours per year.

MSP service margins working & billable hours table

Source: Sea-Level Consulting

Keep in mind that 76% is the yearly billable percentage. When you consider that engineers are 0% billable during vacation, sick days, holidays, and training, they actually have to be near 83% billable on working days to average 76% at the end of the year.

Not accounting for 0% billable time is a common mistake we see. Service executives aim for 75% billable for the days their engineers are at work and end up with yearly averages in the mid-60s.

Levers 2 & 3: Billing rate & engineer pay

Before we can cover billing rates and engineer pay, we need to discuss the term engineer billing multiple. And before we get to that, we should quickly review the three different kinds of service:

  • Technical services: Ad hoc, non-contract, break/fix and staff augmentation
  • Professional services: Scoped project work
  • Managed services: SLA-based, all you can eat, recurring contract revenue

Technical services are typically the lowest margin service because they’re fraught with inefficiencies like driving all over town and back again, constant non-billable travel to find parts, high skill resources needed to deliver entry level work, call backs, disputed invoices, and on and on.

Professional service margins are better because work is scoped in advance, parts are ordered in advance, and engineers are typically scheduled in four- to eight-hour blocks matched to the skills required, which drives up utilization.

Managed services promise even better margins because most of the travel is cut out, and we leverage tools, technology, standard processes, and standardized equipment. In many ways, standardization allows lower cost resources to deliver better services.

Calculating the engineer billing multiple

Here’s how you calculate your engineer billing multiple.

Engineer billing multiple = Total service billing / Engineer pay

For example, if an engineer bills $15,000 monthly and is paid $5,000 monthly, their multiple is 3.0.
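That calculation is a one-liner; here it is as a hedged Python sketch (names are mine):

```python
def billing_multiple(total_service_billing, engineer_pay):
    """Engineer billing multiple: billings divided by pay, same period."""
    return total_service_billing / engineer_pay

# The example above: $15,000 billed monthly on $5,000 monthly pay
print(billing_multiple(15_000, 5_000))  # 3.0
```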

In the table below, you can see that technical service with a 50% labor cost amounts to a 2.0 engineer billing multiple. Another way of saying it is the engineer bills twice his cost.

Professional service engineers should have about a 40% labor cost or a 2.5 engineer billing multiple.

Managed services, if you’re really leveraging your tools and processes, promises a 25% labor cost, or a 4.0 engineer billing multiple.

Note that a typical IT solution provider is doing some mix of technical, professional, and managed services, and that a 3.0 multiple is the goal for all these services combined.

MSP service margins engineer billing multiple table

Source: Sea-Level Consulting

The chart below illustrates the relationship between utilization, billing rate, and engineer pay.

Let’s assume a 3.0 multiple is our target, and we can expect an engineer to bill 76% of his time. We can see that an engineer earning $60,000 per year will need to bill $115/hour for 76% of his available time to achieve a 3.0 engineer billing multiple.

utilization

Source: Sea-Level Consulting

Many companies that we coach at Sea-Level discover they have a big problem with one of the first three levers simply by reviewing this chart.

Lever 4: Agreement efficiency ratio

Now, let’s talk about the elusive and misunderstood 4th lever of gross service margin: agreement efficiency ratio. At Sea-Level, we spend a lot of time coaching IT solution providers on how to understand, measure, and use this lever.

Here’s how you calculate agreement efficiency ratio:

Agreement efficiency ratio = Flat fee billing amount / shadow billable

Shadow billable is the amount you would have billed the client at your normal billing rates for time and materials work. For example, if your normal billing rate is $150/hour and you work for three hours, your shadow billable is $450.

Flat fee billing amount can be a ticket, project, or agreement. Let’s say you have a flat fee agreement with your client for $1,500/month and your normal billing rate is $150/hour.

In month 1, you do 5 hours of work (shadow billable) to earn the $1,500 flat fee. Your agreement efficiency ratio for this month is 2.0 — 1500/(150*5).

In month 2, you do 20 hours of work to earn the $1,500 flat fee. Your agreement efficiency ratio for this month is 0.5 — 1500/(150*20).

In month 1, you’re doing half the work you “should be” and in month 2, you’re doing twice the work you “should be” for the flat fee you’re paid. Another way of saying this is that in month 1 you’re working for twice your normal billing rate, but in month 2 you’re working for half your normal billing rate.
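Both months can be checked with a small Python sketch of the ratio formula (names are mine):

```python
def efficiency_ratio(flat_fee, billing_rate, hours_worked):
    """Flat fee earned divided by shadow billable (rate x hours)."""
    return flat_fee / (billing_rate * hours_worked)

# $1,500/month flat fee at a $150/hour normal billing rate
print(efficiency_ratio(1_500, 150, 5))   # month 1: 2.0
print(efficiency_ratio(1_500, 150, 20))  # month 2: 0.5
```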

At Sea-Level, we consider 1.25 to be the optimum efficiency ratio. In other words, you ideally have $800 of shadow billable for every $1,000 earned in flat fees. This effectively creates a 25% “premium” for taking on the risk of the flat fee service.

Tying the 4 levers together

Let’s set some values for our levers to create an example.

Lever 1: Utilization | 76%
Lever 2: Billing rate | $150/hr
Lever 3: Engineer pay rate | $60,000/yr
Lever 4: Agreement efficiency ratio | 1.0

First, we’ll break down some of these numbers:

  • An engineer earning $5,000/month with a target 3.0 billing multiple should be billing $15,000 per month (5,000*3).
  • $15,000/month divided by $150/hour tells us we require 100 hours of billing per month.
  • 40 hours/week times 52 weeks is 2,080 hours per year. 2,080 hours/year divided by 12 months is 173.33 hours/month.
  • 100 billing hours divided by 173.33 available hours/month means we need to be 58% billable to achieve our 3.0 billing multiple.
  • 76% billable would mean the engineer should be billing 132 hours per month.

This all assumes a 1.0 efficiency ratio ($1000 flat fee / $1000 shadow billable).
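The arithmetic in the bullets above can be double-checked in a few lines of Python, using the article's numbers:

```python
MONTHLY_HOURS = 40 * 52 / 12        # 2,080 hours/year => ~173.33 available hours/month
pay_per_month = 60000 / 12          # $5,000/month engineer pay
target_multiple = 3.0
billing_rate = 150                  # $/hour

required_billings = pay_per_month * target_multiple     # $15,000/month
required_hours = required_billings / billing_rate       # 100 billing hours/month
required_utilization = required_hours / MONTHLY_HOURS   # ~58% billable

print(round(required_utilization, 2))   # 0.58
print(round(0.76 * MONTHLY_HOURS))      # 132 hours billed at 76% utilization
```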

The following chart shows what can happen as you manage and adjust your 4th lever: agreement efficiency ratio.

service margin MSP efficiency ratio metric chart

Source: Sea-Level Consulting

Look at line 28 where the efficiency ratio is 0.5. The effective billing rate is half your normal billing rate. Your engineer will need to be 115% billable—to bill for 200 hours out of the 173 available—to bill three times their pay. We can agree this is not reasonable. If the engineer is 76% billable (132 hours), his total billings will be $9,925 for the month.

Look at Line 38 where the efficiency ratio is 1.0. The effective billing rate is exactly your normal billing rate. Your engineer will need to be 58% billable—to bill for 100 hours out of the 173 available—to bill three times their pay. We can agree this is reasonable. If the engineer is 76% billable (132 hours), his total billings will be $19,850 for the month.

Now let’s look at the power of managing the 4th Lever on Line 43 where the efficiency ratio is 1.25. The effective billing rate is 1.25 times your normal billing rate. Your engineer will need to be 46% billable—to bill for 80 hours out of the 173 available—to bill three times their pay. We can agree this also is reasonable. If the engineer is 76% billable (132 hours), his total billings will be $24,812.50 for the month.

There’s a massive difference in gross service margin between $9,925 and $24,812.50 billing. Three of the levers didn’t change, but by moving the 4th lever, you had a big impact on revenue without changing costs. In other words, you had a big impact on margin.
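Those three chart rows can be sanity-checked directly. Note that the chart's dollar figures imply roughly 132.33 billed hours (just over 76% of the 173.33 available), so this sketch uses exact fractions to match them:

```python
from fractions import Fraction as F

billing_rate = 150
billed_hours = F(397, 3)   # 132 1/3 hours, as implied by the chart's dollar figures

for ratio in (F(1, 2), F(1, 1), F(5, 4)):   # efficiency ratios 0.5, 1.0, 1.25
    effective_rate = billing_rate * ratio
    print(float(ratio), float(billed_hours * effective_rate))
# 0.5  -> 9925.0
# 1.0  -> 19850.0
# 1.25 -> 24812.5
```

Same engineer, same hours, same pay; only the efficiency ratio moves, and monthly billings swing by a factor of 2.5.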

Sea-Level has coached a couple hundred companies in the last seven years. When we start, the average agreement efficiency ratio we find is around 0.7, meaning our clients are working for about 70% of their normal billing rates when servicing their flat fee agreements.

Spending management effort to control the 4th lever on flat fee tickets, projects, and agreements means you’ll spend fewer engineer hours to earn the flat fee. This frees up your engineers’ time to do more time and materials work, more projects, or to service more new agreement revenue with the same labor costs.

Now, imagine replacing that utilization-based engineer bonus plan you have (since it only promotes burying more time in flat fee agreements) with a plan aligned to the 4th lever, efficiency ratio. Imagine the places you’ll go!


Hear Rex Frank explain the 4 service margin levers in our Frankly MSP podcast interview.

The post The 4 Service Margin Levers an MSP Can Control appeared first on Auvik Networks.

How to Deal With Inheriting a Bad IT Setup


While winning a new client is a scenario all IT solution providers and managed service providers (MSPs) relish, the real work always starts during onboarding.

Documenting the client’s IT infrastructure, shoring up existing systems, and eliminating any immediate and urgent issues you spot can be a challenge. You want to make a great first impression, and you definitely don’t want them to experience issues or outages within their first few days of working with you.

Taking on a new client can therefore be stressful. But there’s one scenario that seems to heighten stress levels to the nth degree for many MSPs.

When taking on a new client, how do you deal with inheriting a bad IT setup?

When IT advice goes wrong

Unlike the accountancy, law, or health professions, there are few professional qualifications necessary for entering the IT industry.

As a result, it’s all too common for your MSP business to take on a client whose IT infrastructure seems to have grown as a result of advice given by:

  • the warehouse manager
  • the local laptop repair shop
  • the boss’s nephew who is “very good with computers”

Sadly, there’s also one very familiar culprit for poor advice: an incumbent IT provider who didn’t really know what they were doing.

Most often, the advice your new client received from their previous IT advisors wasn’t given with any malice intended. Many IT part-timers have great experience with PCs and perhaps even small networks, but find themselves out of their depth when offering advice to help a growing business meet their technology needs.

The result? A mish-mash of technologies your new client has somehow limped along with.

When looking at this hodge-podge, your first instinct as a professional IT advisor might be to take a broom and sweep everything away. Starting fresh with a new business-grade, fit-for-purpose IT infrastructure can seem easier and better than trying to rectify the multitude of sins you’ve inherited.

But tread carefully when laying out plans for any new IT infrastructure. It’s easy to cause offense to your new client, even if none is intended.

Don’t call your client’s IT systems ugly!

If you wade in and tell your new client their existing system isn’t fit for purpose, it’s worth being aware that you’re not only belittling the infrastructure your client already owns, you’re also belittling the people who made the decisions to buy or implement that infrastructure.

While the advice for the purchases may have come from a third party—the IT part-timer, the incumbent IT provider, or even the boss’s nephew—remember that only one person gave the go-ahead for those IT purchases: your new client.

If you choose to tell your client their existing IT infrastructure is rubbish, you’re taking a risk that what the business owner or decision-maker hears is, “You’ve made some bad decisions.”

While some pragmatic decision-makers might appreciate such a blunt approach, the vast majority of us don’t like our poor decisions being highlighted by others. Would you? Most of us hate the idea of writing off investments made poorly, and we often cling to these “sunk costs.”

At best, you’ll experience resistance to your ideas for new infrastructure. At worst, you’ll alienate the decision-maker and they’ll dig their heels in, insisting you work with them to remediate the existing setup.

Either way, you’re in for a very frustrating experience.

Be tactful when discussing new IT infrastructure

No business owner makes decisions they know aren’t right for the growth of their business. Likewise, very few IT advisors give advice they don’t believe is in the best interests of their client.

With this in mind, don’t go in all guns blazing and belittle the previous IT advisors. Don’t bad mouth them. Don’t point out how they offered poor advice. Nobody appreciates this.

Likewise, don’t tell your new client they’ve made bad decisions with their IT. Don’t highlight how they’ve wasted money, made poor choices, or listened to idiotic advice.

Instead, appreciate the reality of the situation—your new client sought the best advice they could find at the time.

They implemented systems they felt would suit them for their immediate needs.

They purchased the equipment and software they felt would work for their business at the time.

When approaching the conversation about the client’s existing IT infrastructure, start the conversation by acknowledging that you can see how the client made the choices they thought were best for them at the time. Point out that their existing IT infrastructure has brought them to where they are now, but they’re clearly looking to work with a new IT advisor—you—for a reason.

The bottom line here is not to make the client feel stupid for taking bad advice or making poor purchasing decisions. They did the best they could with the information they had.

Now, however, you can work with them to implement new IT infrastructure that will help their business as it is today. New IT infrastructure that will help their business overcome the pains and irritations they’re experiencing today. New IT infrastructure that can help their business grow.

Don’t focus on how poor the existing infrastructure is. Instead, work with them on the goal of building new infrastructure to support them today and into the future.

By doing so, you’ll find a lot less resistance to change.

Prove you’re a partner

It can be frustrating for you as a professional, progressive MSP to inherit IT infrastructure that, quite frankly, stinks.

While you might be bemused—or worse, professionally offended—that anybody could have suggested IT infrastructure that clearly isn’t fit for purpose, you’ll do yourself no favors by highlighting this fact to your new client. After all, they’re the ones who gave the go-ahead for the infrastructure to be implemented.

Put ego to one side. Resist the urge to prove your technological and business superiority to the client’s previous IT advisors.

Instead, acknowledge that the existing IT infrastructure grew as it did out of choice—and those choices seemed like the right ones for the client at the time.

Prove to your new client that you’re their partner as they grow their business, a partner who can be trusted over the long-term.

The post How to Deal With Inheriting a Bad IT Setup appeared first on Auvik Networks.

3 Minor Network Alerts You Shouldn’t Ignore


When you put Auvik on a network for the first time, the software automatically starts monitoring that network for more than 40 potential issues.

When Auvik finds an issue, it triggers an alert. Network alerts range in severity from emergency at the top all the way down to informational.

As you work with Auvik, you may see a lot of alerts coming your way. It’s obvious you need to deal with the emergency and critical alerts. But what about the simple warnings and informational alerts?

Your first impulse may be to turn them off or turn them down. But wait!

These warnings shouldn’t be dismissed so quickly. In particular, I want to look at three fairly common warning alerts that could be pointing you to a bigger issue that’s brewing:

Warning: There’s something going on here

If you’ve left the default thresholds for these three alerts, you’ll see notifications when:

  • More than 10,000 packets have been discarded on a device within 5 minutes
  • More than 10,000 packet errors have occurred on a device within 5 minutes
  • Interface utilization on a device is over 80%

And if you look at the Auvik Knowledge Base articles for these alerts, you’ll see there are a lot of possible culprits—we list 23! For example, the alerts could be caused by:

  • Misconfigurations that have existed for years but are finally being exposed
  • Changes that were made to the network without your knowledge
  • A poorly sized network connection that’s now reaching its limits
  • A hardware or wiring issue, like an Ethernet cable that someone stretched just an inch too far

These are not things that should be ignored for too long. An unanticipated change to the network could create a rippling set of problems over time. A stretched cord is a big accident just waiting to happen.

Now’s your chance to deal with the issue when it’s still small.

Searching for root cause

So now what?

First, take a breath. Now let’s ask some questions to see if we can narrow the search:

  • Is there only one alert that’s occurring, or are you seeing multiple different alerts that may be related?
  • What types of devices are the alerts affecting? Are the alerts limited to one interface or one device, or are they affecting devices across the network?
  • What patterns do you see? Are there alerts that trigger every day at the same time? Or does the alert get triggered on the hour every hour? Some patterns are harder to see than others but try to take a high-level view to spot the trends.
  • Did you get any client calls that coincided with the alerts? Maybe something about a dropped VoIP call, a laggy Wi-Fi connection, or a web page taking forever to load?

The answers will hopefully start to point you in the right direction. But alerts don’t always paint the whole picture, and now we may need to dig deeper to identify the root issue.

Look for changes. Did the configuration on the network recently change? Check device configuration history in Auvik for anything recent.

Look for misconfigurations. Did Auvik also alert you about a potential misconfiguration somewhere, such as an interface duplex mismatch or a VLAN with no interfaces?

Look externally. This is an especially good place to look when dealing with a client who uses many external applications, or if the alerts come with a complaint that the internet is slow. It may be an ISP issue. Use Auvik to check on ISP performance.

Making the fix

Once you’ve tracked down a cause and isolated a culprit, it’s time to put a fix in place. Ultimately, the resolution will vary based on what you’ve identified through your troubleshooting.

Keep in mind there may be multiple issues at play, and you may not resolve the issue completely on the first attempt. That’s OK—rinse and repeat based on the new evidence until you’ve got it nailed.

Finally, if you’ve tried everything and haven’t uncovered the cause, don’t give up. If nothing else works, remember that data doesn’t lie. You might need to spin up a packet capture tool like WireShark and collect the traffic itself. The cause will be in there somewhere.

The post 3 Minor Network Alerts You Shouldn’t Ignore appeared first on Auvik Networks.

3 MSP Insights From the CompTIA Industry Outlook 2018 Report


Businesses spent $4.5 trillion worldwide on IT in 2017—and according to CompTIA, that’s going to rise to $4.8 trillion in 2018, as the industry is forecasted to grow 5%.

But what does that mean for MSPs?

On one hand, technology is being more widely adopted by businesses which bodes well for service providers. However, there will also be challenges in bringing more and new technology into the mainstream. As CompTIA reports in its 2018 Industry Outlook:

“A scan of the 2018 horizon reveals a year that appears to be on the cusp of profound change. And yet, the closer a major leap forward seems, the more one is reminded of the last-mile challenges associated with next generation innovation.”

It’s clear the managed services industry is maturing and evolving. Emerging technologies are more accessible, businesses are increasingly being led by executives who understand tech, cloud adoption is rising—and the list of changes goes on.

Here are three key insights you should consider from the CompTIA industry report to prepare your MSP for the year ahead.

  1. AI will be the biggest driver of MSP change

    Of all emerging technologies, artificial intelligence (AI) is most likely to change the IT ecosystem. CompTIA says tech solutions are becoming increasingly complex for businesses, because as technological literacy rises, demands for IT tools become more specific.

    So, as more and more businesses request AI, service providers will have to respond and evolve. Significant computing power is needed to support AI, which means the cloud is a better host than on-premises servers. And by adding an additional layer of intelligence, MSPs will need to both support a more complex infrastructure and solve new problems.

  2. Revenue generation isn’t enough

    CompTIA says generating revenue isn’t enough for a service provider business to survive in the current ecosystem, especially if they’re undergoing business transformation. To optimize revenue and successfully evolve, MSPs must be operating at maximum efficiency.

    Achieving this, CompTIA says, “includes fine-tuning the business across a range of areas, from sales, marketing and finance to human resources and supply chain logistics… The key to operational efficiency requires an unflinching examination of how well—or not—operations run today, especially before taking the plunge into a new business model.”

    More than half of surveyed businesses said two vital best practices to reach maximum efficiency include:

    • Calculating ROI/time to profitability before launching a new project, and
    • Using repeatable processes throughout the company.

    For example, your MSP can boost its efficiency by automating network management—you’ll have more time to focus on value-added activities rather than time-intensive manual network tasks.

  3. There’s not enough tech talent

    Demand for tech talent will continue to exceed supply in 2018, CompTIA says. The report indicates nearly four in 10 U.S. IT companies are recruiting for open IT positions.

    Emerging technologies like the Internet of Things and AI are driving the need for IT pros, as are business areas like software development and data. Brand new roles are being created in the IT space to match the changes affecting the industry—such as titles like AI developer, blockchain developer, machine learning trainer, and cybersecurity architect.

    Compounding the issue, recruiting for hard-to-fill skills means employers may have to post multiple listings in several markets. There’s no doubt that finding, recruiting, and retaining great talent will continue to be a major challenge for MSPs in 2018 and beyond.

For more insights, including assessments of IoT, cybersecurity, and subscription pricing, see the full CompTIA IT Industry Outlook 2018 report.

The post 3 MSP Insights From the CompTIA Industry Outlook 2018 Report appeared first on Auvik Networks.

Auvik Use Case #8: Troubleshooting Internet Connections


No doubt you’ve received that call from a client: “My internet is slow!” Or maybe, “My internet is down!”

With Auvik’s Internet Connection Check, you can get out in front of such issues so you’re aware of the problem—and potentially have a solution—before the client calls.

The connection check, also known as the cloud ping, is tied to two Auvik alerts.

  1. Internet Connection Is Lost lets you know when we can no longer reach your client’s default gateway. Auvik classifies this as a critical alert.
  2. The Default Route Change alert lets you know that your client’s primary internet connection has failed over to a secondary connection. Auvik classifies this as a warning alert.

Internet Connection Is Lost

Here’s how James Ritter, founder & CEO of Pulse Technology Solutions, uses the cloud ping to quickly help clients:

“We’re in the lightning capital of the world down here [in Florida] and when we have bad weather in the area, I can tell with Auvik when our internet providers are going up or down from the gateways going up and down. Then we can proactively call the ISP and say, “Look, are you having an outage?” That helps alleviate stress from the client, because we already know what happened and have an ETA for the internet being back up. That’s extremely valuable for us.”

Jordan Farmer at Dash2 Group echoes James’ comments:

“Auvik really helps with knowing about internet outages well before a user even picks up the phone and calls us. I highly recommend Auvik to anyone.”

Default Route Change

The Default Route Change alert can also provide valuable intel. As we’ve discussed before, warning alerts can point you to potential bigger issues that are brewing. In this case, you’ll want to stay on top of making sure the primary connection is eventually restored to service and the route is changed back to that primary connection.

One of our partners, before they had Auvik in place, found out the hard way what can happen when you’re not alerted to such route changes.

“One of our clients who had two internet connections had an outage—because their primary connection had been down for three months and we didn’t know it. We just didn’t know because all the systems were reporting in and the provider doesn’t bother to tell the customer and the customer didn’t know.

So Auvik being able to detect both gateways and tell us when the primary is down and the secondary took over, or the secondary’s down and the primary’s still up, is huge because now we can address that problem before it becomes a problem.”

Is it the network… or is it the ISP?

Auvik’s Internet Connection Check can also help with troubleshooting network problems. You can see Packet Loss and Round Trip Time charts for each interface, giving you a view into jitter and latency.

Auvik’s internet connection packet loss chart

Auvik’s internet connection round trip time chart

For example, if you have a VoIP network experiencing issues, these two graphs are a handy way to see if the quality problem is related to the internal network or something to do with the ISP. High levels of either latency or jitter could indicate it’s ISP related.

ISPs can measure to their equipment—the final router before the client—but the “last mile” between them and the physical client is where things can go sideways. Is it the firewall? Is it inside the network? Is it the link from the firewall to that ISP router? Without data, how can you pinpoint the culprit? If you can show a jump in latency or 50% packet loss, you have the data to get your ticket with the ISP escalated to someone who can dig in and troubleshoot.
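If you're collecting your own ping samples alongside Auvik's charts, the two metrics are straightforward to compute. A minimal sketch (the function name and sample data are ours, for illustration; jitter here is the mean absolute difference between consecutive round trip times):

```python
def summarize_pings(rtts_ms):
    """Summarize a list of ping results in ms; None marks a lost packet."""
    replies = [r for r in rtts_ms if r is not None]
    loss_pct = 100 * (len(rtts_ms) - len(replies)) / len(rtts_ms)
    avg_rtt = sum(replies) / len(replies)
    # Jitter: average change in latency between consecutive replies
    jitter = sum(abs(a - b) for a, b in zip(replies, replies[1:])) / (len(replies) - 1)
    return loss_pct, avg_rtt, jitter

# Six samples with two drops and one latency spike
loss, avg, jitter = summarize_pings([20, 22, None, 80, 21, None])
print(round(loss, 1), avg, round(jitter, 2))  # 33.3 35.75 39.67
```

Numbers like these, timestamped, are exactly the kind of evidence that gets an ISP ticket escalated.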

You’ll find more information on using the Internet Connection Check in the Auvik Knowledge Base.

The post Auvik Use Case #8: Troubleshooting Internet Connections appeared first on Auvik Networks.

An Introduction to APIs for MSPs


Think about systems and workflows that are universal, like the ones that manage commercial aviation.

Whether you fly United, Air France, or Emirates, all of these airlines need to be able to communicate with each other for a number of different purposes, from layovers and baggage transfers to security.

But under the hood, each airline’s systems are architected slightly differently, with different software, databases, and other key pieces of infrastructure. How can we get these systems to talk to each other?

They may not natively speak the same language. But through some careful planning and collaboration between the maintainers of each platform, a common set of instructions, inputs, and outputs can be agreed upon to remove the complexity and uncertainty of one system telling another one what to do.

What’s an API?

API stands for application programming interface. (For a nitty gritty technical explanation, try this API for Beginners video). But really, all you need to know for now is that an API is a clearly defined set of instructions that defines how one system can interact with another.

There are different styles of APIs that have gained and lost prominence over the last few decades. For the purposes of our discussion, we’ll talk about Representational State Transfer (RESTful) APIs, which have emerged as this decade’s most popular style of web service API. There are many other more developer-centric APIs, like library or class-based APIs, that we’ll keep out of scope.

Modern APIs expose what are called endpoints, or paths that represent an object or collection of objects, say, the hostname of your largest client’s main firewall.

Each endpoint typically allows for (at least one of) a set of CRUD actions. CRUD stands for create, read, update, and delete. In our hostname example, using an API, you could invoke an action to create a new hostname, read the existing hostname, update the existing hostname, or delete the existing hostname.
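As a concrete (and entirely hypothetical) illustration, here's how those four CRUD actions on a hostname might map to HTTP verbs against a REST endpoint. The API root and paths are invented for this sketch; it uses only Python's standard library to build the requests:

```python
import json
from urllib.request import Request

BASE = "https://api.example.com/v1"   # hypothetical API root, for illustration only

def api_request(method, path, body=None):
    # Build (but don't send) an HTTP request with an optional JSON body
    data = json.dumps(body).encode() if body is not None else None
    return Request(BASE + path, data=data, method=method,
                   headers={"Content-Type": "application/json"})

create = api_request("POST",   "/devices",          {"hostname": "fw-hq-01"})
read   = api_request("GET",    "/devices/fw-hq-01")
update = api_request("PUT",    "/devices/fw-hq-01", {"hostname": "fw-hq-02"})
delete = api_request("DELETE", "/devices/fw-hq-02")

for req in (create, read, update, delete):
    print(req.get_method(), req.full_url)
```

Each endpoint-plus-verb pair replaces a vendor-specific CLI session; actually sending a request (with `urllib.request.urlopen`, for example) would perform the action.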

Until now, the network’s been stuck in the Stone Age

APIs have deeply penetrated the endpoint space of servers and workstations. Take a Windows application developer. In each area where an application needs to plug or hook into Windows, the developer can use an appropriate set of API endpoints.

But to date, APIs have been few and far between on the network.

Recall some of the legacy methods we’ve been using over the years to communicate with network devices:

  • CLI
  • Web GUIs
  • SNMP
  • Vendor-specific apps

Think about just how fragile and frustrating these communication methods can be to use. Syntax can change from version to version of firmware. With different hardware configurations, you need to reference different components differently. Modern web-based technologies like JavaScript prevent easy scraping of data. SNMP implementations require in-depth knowledge of how to navigate the maze of MIBs and OIDs—not to mention there’s a large amount of incomplete and incorrect data that surfaces through SNMP.

Performing a “simple” task like adding a rule on a firewall has traditionally been cumbersome. But with a clearly defined set of API endpoints, I could update a set of firewall rules without having to navigate through a CLI configuration hierarchy and issue a series of commands.

Here’s an example of a cURL request I could make to the Meraki Dashboard API:

curl -L -X PUT \
  -H 'X-Cisco-Meraki-API-Key: ' \
  -H 'Content-Type: application/json' \
  --data-binary '{"rules":[{"comment":"A note about the rule","policy":"deny","protocol":"tcp","destPort":"80,443","destCidr":"192.168.1.0/24,192.168.2.0/24","srcPort":"any","srcCidr":"any","syslogEnabled":true}],"syslogDefaultRule":true}' \
  'https://api.meraki.com/organizations/[organizationId]/vpnFirewallRules'

Within this relatively small request, we’re specifying the following:

  • comment: description of the rule (optional)
  • policy: allow or deny traffic specified by this rule
  • protocol: type of protocol (must be tcp, udp, icmp or any)
  • srcPort: comma-separated list of source port(s)
  • srcCidr: comma-separated list of source IP address(es)
  • destPort: comma-separated list of destination port(s)
  • destCidr: comma-separated list of destination IP address(es)
  • syslogEnabled: log this rule to syslog

All done within a single command. How would this look on the CLI?

We’d have to craft something like this (for a Juniper SRX):

set interfaces fe-0/0/0 unit 0 family inet filter input blocked.IP
set interfaces fe-0/0/0 unit 0 family inet filter output blocked.IP

set policy-options prefix-list block.outbound 192.168.1.1/24
set policy-options prefix-list block.outbound 192.168.2.1/24
set policy-options prefix-list unblock.outbound

set security log mode stream
set security log source-address 192.168.1.254
set security log stream Server format syslog
set security log stream Server host 192.168.1.1
set security log stream Server host port 123

set firewall family inet filter blocked.IP term 1 from prefix-list block.outbound
set firewall family inet filter blocked.IP term 1 from prefix-list unblock.outbound except
set firewall family inet filter blocked.IP term 1 then syslog
set firewall family inet filter blocked.IP term 1 then discard
set firewall family inet filter blocked.IP term 2 then accept

See how much more concise the API query is?

network API MSP sunrise new age

Photo: Pexels

The new age of APIs in the SMB network space

We all know the network layer is fundamental as a base for applications and services to ride on.

For the longest time, we’ve seen servers, workstations, and even newer device types like smartphones, give developers increasingly sophisticated and data-driven ways to interface with their operating systems.

But network gear like your routers, switches, and firewalls have been relegated to using archaic methods like the command line interface (CLI), a web GUI that requires some annoying plugin like Java or Flash to show all content correctly, or a native application like Cisco ASDM to perform relevant configuration.

More recently, though, network vendors have begun exposing APIs, either as a supplement or exclusive replacement to the CLI. (GUIs are here to stay).

Vendors with a cloud management component to their offering, like Meraki and Datto / OpenMesh, have developer APIs. We’ve also seen some on-prem products from the likes of Barracuda and Sophos that allow interaction with devices using an API.

These APIs make possible some powerful automated workflows, such as bulk resetting passwords, initiating firmware upgrades, and provisioning VLANs for a new application (like voice or video) throughout the network stack in a matter of seconds instead of minutes or hours. As a result, APIs free up your technicians to devote their time and expertise to helping you scale your business.

What does this mean for MSPs?

When replacing or augmenting certain areas of your MSP tool stack, look for products and vendors that have opened up their platform with APIs. This will give you the flexibility of integrating other products into your stack, and potentially developing automation to eliminate repetitive tasks that are better left to a machine.

The more information you can extract, transform, and load from one system into another, the more you open up possibilities for automation and more seamless workflows. Open APIs promote interoperability from one vendor toolset to another, allowing tight integrations.

The end goal here is automation nirvana, with historically strenuous and humanly unscalable tasks being automated, allowing you to focus on growing your MSP practice instead of poring over spreadsheets and attempting manual reconciliation.

What’s Auvik’s API strategy?

At Auvik, we’re huge advocates for the potential breadth and depth of data injection and extraction that APIs allow. We’re working on a number of APIs that will allow our partners and third-party vendors to tightly integrate with Auvik. Watch for developments throughout 2018.

The post An Introduction to APIs for MSPs appeared first on Auvik Networks.

The Deliciously Effective Way to Build Better Relationships With Your MSP Clients


Running an IT solution provider or managed service provider (MSP) business would be easy—if you didn’t have any staff or clients to manage!

That’s a common joke I hear from IT business owners, but it’s based in truth. Your MSP business would be a lot easier to run if it didn’t have those frustrating humans getting in the way, right?

It doesn’t have to be so difficult, of course.

I’ve written before about how best to manage some of your acute staffing challenges. In this article, I want to look at how you can make life easier for yourself dealing with that most essential element of any MSP business: your clients.

It all comes down to how you educate them on working with you.

Common frustrations of working with clients

One of the most common frustrations MSPs experience when working with clients is that they don’t consider IT when they’re growing their business.

How many of us are familiar with the scenario of receiving a phone call on a Monday morning that goes something like this:

Client: “Hi, we’ve got a new employee that needs setting up with accounts and a new PC.”
Help desk: “Sure, we can help with that! When do they start?”
Client: “This morning. They’ve just arrived. When can you set them up?”

You then have two choices.

You can cite your process. Perhaps you need 48-hours warning of new employees, so in this case you’ll be ready to have that new employee up and running by Wednesday. But in standing your ground in this way, you’ll incur the ire of the client who won’t understand why you can’t quickly deal with such a “simple” enquiry.

That leaves the alternative of you scrambling around to source a PC and set up a new user immediately, perhaps pulling staff away from other tasks and projects to fulfill this urgent requirement.

It’s not like Monday mornings are busy for your service desk or anything, right?

This is just one common scenario that every service desk is familiar with. So how can you educate your client about working with you in a better way?

Why you should have regular conversations with your clients

When was the last time you sat down and talked with your client about their business?

I’m not talking here about reviewing outstanding service desk tickets, upcoming projects, or potential new technologies with them.

I’m asking when was the last time you picked up the phone, or went and visited your client, to ask, “How’s business going?”

Some MSPs have a routine of quarterly business reviews (QBRs) which prompt these conversations. QBRs are scheduled meetings between you and your client to review what has happened, and to set the stage for what’s coming up.

But even if you don’t schedule regular QBRs, you can easily have these conversations with clients by popping by to see them and asking a simple question: “How is business treating you?”

You might think it's rude to simply turn up at one of your client sites for this purpose.

To this I say, donuts help!

The benefits of donut drop-ins

I find that arriving armed with a box of treats turns you into a welcome visitor. No client I've ever worked with has said, "It's not convenient right now" when I drop by their offices with some cupcakes or donuts.

While I’m handing the goodies out to staff (which really won’t hurt your net promoter score results either), I ask them what they’re working on, what’s been frustrating them, and what they’ve got coming up that’s exciting.

The results can be very surprising!

Two conversations I recall having as a result of my donut drop-ins went something along the lines of:

HR department: “Well, we’re currently interviewing for a new position in finance.”
Me: “Good luck! Hey, will they need a new PC when they start? If so, let me know when their start date is and we can help them hit the ground running.”

Finance department: “The printer in the corner is being replaced next week with a multi-function unit. We’ll be glad to get rid of that old piece of junk!”
Me: “Cool! When’s the new machine being installed by the photocopier company? I’ll make sure one of my engineers is on hand to help their engineer install the unit correctly so it works on your network.”

In both of these scenarios, a quick conversation helped me uncover an upcoming project that could have been hurled into the service desk, unannounced, like an unwelcome grenade.

Thanks to a box of donuts and an old-fashioned conversation, I was able to educate my clients on how best to work with us to avoid frustration on both sides.

It’s not magic, but it’s deliciously effective

It’s easy to get frustrated with clients for not understanding the effort you put into ensuring their systems work.

The bottom line is that your clients expect you to help them. But they’re not really sure how they can help you. They often assume it all just happens, as if by magic!

By having regular conversations—through quarterly business reviews, floor-walks, and donut drop-ins—you can help educate your clients on better ways to work together.

The post The Deliciously Effective Way to Build Better Relationships With Your MSP Clients appeared first on Auvik Networks.


Legacy Networks Can Hold You and Your Clients Back


“There are no more greenfield deployments,” says networking expert Tom Hollingsworth.

So with every new client you sign comes legacy network devices. This old equipment can act as a problematic headwind against your ability to help clients evolve to meet strategic goals.

The network “should be a tool to an organization like a set of screwdrivers to a mechanic,” says Douglas Grosfield, the CEO of Waterloo, Canada-based MSP Five Nines IT Solutions.

“If their equipment is not allowing them to take advantage of the latest applications, then they’re fighting the tools all the time instead of focusing on what they should be, which is being competitive in their own space.”

Douglas Grosfield, Five Nines IT Solutions

Some of the technologies coming down the pike include advancements in the Internet of Things (IoT), artificial intelligence (AI), and cybersecurity. “Aging equipment is simply not able to handle the increased bandwidth requirements” these technologies demand, says Grosfield.

Not all emerging technologies apply to all industries, of course. But that doesn’t mean your client won’t be at a disadvantage if their network can’t support them.

“The tools that are available in things like cybersecurity are rapidly evolving using AI, and if their network can’t handle or doesn’t provide them with the ability to get that data, then they’re going to exacerbate the problem,” Grosfield explains.

The risks of legacy hardware

There are three primary risks clients face with legacy gear, says Dan Conde, a networking analyst at Enterprise Strategy Group.

  1. Their business could be vulnerable to cyber threats.
  2. They may “not be able to deliver modern services to their end customers or internal customers.”
  3. They won’t be “able to run modern services that are up in the cloud.”

Dan Conde, Enterprise Strategy Group

As more and more businesses adopt cloud-based functions, Conde says clients “need a good network infrastructure that allows them to connect to the cloud.”

Old network devices can also cost clients more money than up-to-date equipment, since legacy infrastructure can demand greater time and effort for you to effectively manage. And that means you end up charging more—or you’re eating those costs, which isn’t great for you.

Moreover, if clients adopt a set-it-and-forget-it attitude, they could give their competitors an opportunity to gain market share, says Grosfield. “If you can’t get your products or services to market quickly enough and efficiently enough through enabling your company with the proper technology, then you’re falling further behind in your own industry.”

But it’s not just your client’s business that could suffer due to old network equipment, he continues. You may lose efficiency too because, “If you’ve got 100 bodies that are taking 15 minutes longer in an eight-hour day because the equipment is old and clunky and not capable of handling the bandwidth anymore, that’s a lot of minutes of inefficiency.”

Not to mention, if the network goes down because it can’t support emerging technologies, says Grosfield, “Your overall user experience is going to suck, and if users aren’t happy with a piece of technology they’re not going to use it. So your adoption rate of the technology’s going to drop and their satisfaction rate’s going to drop. And guess whose fault that is? It’s the tech services folks that take the blame for a poor end-user experience when in fact it’s aging equipment that’s being a bottleneck.”

What’s the delay?

The main problem you could face when you propose to update your clients’ networks is that many SMB clients consider IT an expense rather than an opportunity to evolve, says Grosfield.

“Everybody wants it cheaper and faster all the time, so they buy unmanaged switches. If you start getting a lot of systems on the network that are chatty, you’re generating a lot of traffic. Those older pieces of equipment that have thinner ‘pipes’ on the back panel of the switch are not able to handle the traffic. It’s like stepping on a water hose—you’re squelching the output.”

Photo: Pixabay

In contrast, many forward-looking companies believe “networking is not what it does, but what it enables. It enables better insights into security and allows you to do more innovative business transformation,” says Conde.

Legacy network equipment has the potential to impede both you and your clients from evolving. So how do you convince clients that the network is an enabler of strategic initiatives instead of a cost center?

Initiating the upgrade

One way to start the conversation about updating network infrastructure is to tie it back to business outcomes, says Grosfield.

“If you can articulate the benefits that investing in technology or upgrading the hardware or software will have on their business, then it’s a business decision for them. It’s no longer you trying to blind them with talk about technology.”

This is an example of what he calls being a strategic services provider rather than a managed services provider. “The expectation is a lot higher nowadays. You’ve got to be a lot more modular and strategic so you can react quicker. To be a strategic services provider, you can have that advisory role where you can plug into an organization as necessary and deliver more value as a result.”

Photo: Pixabay

Conde points out you may not have to rip and replace the entire network. “A lot of people say, ‘I’m afraid to change my network because now I have to remove these boxes that I’m accustomed to.’ And you can say no, leave them. Maybe you could just ask those boxes to do a little bit less than they were intended to do and simply route the traffic to another box or a VNF or maybe up to the cloud.”

Grosfield says the alternative to gradual replacement is to be creative in how you enable and charge to upgrade the devices. For example, he says, “Start thinking whether you can package product, software, and hardware so it’s part of their monthly spend with you, and you own that equipment.

“Or partner with leasing firms. Then an infrastructure upgrade is a lease, and that becomes an operating expense instead of the capital expenditure.”

In addition to being a strategic services provider, Grosfield considers himself his clients’ financial advisor “to help them invest in technologies to stay up to speed and give us the ability to deliver the high level of service that we’re promising. Otherwise we have to charge more for the service and it’s less attractive to them.”

Asking clients to implement a significant upgrade can be a hard conversation to have. (You want to be respectful, of course, and not apportion blame about the current state.) But it’s a conversation absolutely worth having.

The post Legacy Networks Can Hold You and Your Clients Back appeared first on Auvik Networks.

The Importance of Network Topology Visualization


Is that a bird’s nest or is it spaghetti? Have you ever asked that question when looking into an IT wiring closet?

At some point in your tenure as a trusted IT service provider, you’ve likely been asked to inventory a network and create a network topology diagram.

Traditionally, you’d pull out a piece of paper and a pencil, and start drawing boxes based on what you remember last time you were on site.

Or maybe you’ve fired up Visio and started creating a block-level diagram of various network devices based on a site survey.

Either way, you’re spending hours or days on site manually inventorying the network, poking into the CLI, drawing pictures, and adding labels.

Let’s face it—nobody likes documentation. It’s a pain to build and a pain to maintain. But we need it. Network topology visualization is crucial to effectively managing a network.

Here are four times when a real-time map of a client’s network can save your team a ton of time, money, and client good will.

  1. Onboarding new staff

    You’re growing again. That’s great! It’s a sign your MSP is doing the right things: increasing your client count, increasing your managed device count.

    But bringing a new staff member up to speed is no small feat. Not only do new hires have to learn a new company and new culture, they also need contextual and technical data about all the clients they’ll be touching.

    Your most tenured folks know your clients’ networks well. They’ve probably been on site many times. They may have even set up the network! But how do you transfer that wealth of data from one brain to another?

    A real-time network topology map gives immediate context to any staff member. They’ll be able to see at a glance exactly what gear a client is using, how the network is wired and configured, and how devices are performing.

  2. Diagnosing basic connectivity problems

    You get a call at the help desk: Your biggest client is also growing. Their management team has been moving desks around to accommodate the growth. Naturally, while moving desks around, they unplugged all the Ethernet cables and moved all the VoIP phones—and didn’t track any of it.

    At one of the new desks, they’ve plugged a user’s computer and phone back into an Ethernet jack. But there’s a problem: The computer works but the phone doesn’t.

    Of course, the cable drops aren’t labelled. And you don’t have someone on site who can interpret the patch panels anyway. Time to fire up the real-time topology map!

    Quickly, you find the computer by IP address, and see that it’s connected into port 14 on switch 2. Hovering over the wire, you see that port 14 is configured for the data VLAN, but not the voice VLAN.

    Jumping now into the management interface of switch 2, you enable the voice VLAN and the new employee is off to the races. What could have been an on-site visit became a five-minute task with a map.
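On a Cisco Catalyst switch, for instance, the fix might look something like this (a hedged sketch—interface and VLAN numbers are illustrative, and the exact commands vary by platform):

```
interface GigabitEthernet0/14
 description New desk - PC and VoIP phone
 switchport mode access
 switchport access vlan 10
 ! The missing piece: tag voice traffic onto the voice VLAN
 switchport voice vlan 20
```

With the voice VLAN applied, the phone tags its traffic onto VLAN 20 while the PC stays on the data VLAN—no truck roll required.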

  3. Investigating network slowness

    The network is slow! It’s being reported by five different users at your client site, all calling in at different times. Even for an experienced tech, this might take a few hours to investigate—hopefully without an on-site visit, at least.

    Time to dig into your real-time network diagram again.

    On the map, you immediately see that all five users are plugged into the same switch, which happens to be at the very end of a daisy-chain of switches. And one of those switches has its uplink speed fixed at 100 Mbit/s instead of the expected auto-negotiate at 1 Gbit/s.

    A quick change and the problem is solved. You just saved a truck roll and resolved the infamous “my network is slow” problem in less time than it takes to get a coffee.
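On Cisco IOS gear, for example, reverting a hard-coded uplink to auto-negotiation is typically just two commands (illustrative only—the interface name is an assumption, and syntax varies by vendor):

```
interface GigabitEthernet0/24
 description Uplink to next switch in the chain
 ! Remove the hard-coded 100 Mbit/s setting and restore auto-negotiation
 no speed
 no duplex
```

Both ends of the link should auto-negotiate; a speed mismatch or a fixed/auto mix is a classic cause of duplex errors and "slow network" complaints.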

  4. Onboarding a new client

    A new client—congrats! You’ve already done some initial investigation, issued a quote, and won the deal, but now you need to eliminate any surprises.

    Rolling out a network topology mapping tool will discover the client’s network and all their assets in under an hour. No tracing cables, no reading MAC forwarding tables, and no pencil and paper. Remember when you had to budget five or more billable hours to build that topology diagram? Not anymore. Repurpose those hours and increase your margins.

    The topology gives you quick visibility into how the network is laid out and any potential misconfigurations you need to clean up.

    It may also uncover additional devices that came online since your initial proposal. Did the client forget to mention the three switches at the back of the building? That can hurt margins.

    As you find misconfigurations, make fixes. As you find devices, update the agreement. The diagram automatically updates in real-time to show you the progress.

Network topology visualization can provide instant client context to service techs, speed up training time, expedite troubleshooting, avoid costly downtime, and improve your overall client satisfaction. Once you’ve used a real-time map, you’ll wonder how you and your team ever managed without it.

The post The Importance of Network Topology Visualization appeared first on Auvik Networks.

How to Secure That Small Business WLAN


Too often, small business equals small budget when it comes to doing wireless, but sometimes just knowing what can be done goes further than more money.

There are many ways to approach wireless security, and businesses of any size have options. Let’s talk about how I approach my own small- and mid-sized business (SMB) settings when it comes to security and Wi-Fi.

Requirements: Everybody’s got ‘em

It all starts with understanding your client’s business operations:

  • What network devices and applications are used in their environment?
  • How many wireless networks do they have? How many should they have?
  • What does the connected LAN look like?
  • Do they have PCI or HIPAA concerns?
  • Do they want to provide a guest SSID?
  • Do they want their employees to have the freedom to use their own devices on the network?

Write it all down. Think about it, refine it, and ultimately, use it to shape how they require the network environment (and all its distinct little buckets of users and devices) to work. Until you do that, it’s hard to implement configurations and solutions.

Don’t cross the streams

Regardless of the size of the business environment, you absolutely don’t want guest traffic mixing with the point-of-sale terminals. Nor do you want the electronic medical records (EMR) terminals sharing a subnet with the CCTV cameras.

Some things absolutely have to be isolated. It doesn’t mean you need a dozen SSIDs, but it may well mean you have at least two or three.

If your clients have the budget, a NAC system can help keep the number of SSIDs down while figuring out which device should logically go where on the network. But the complexity and total cost of ownership are frequently prohibitive for SMB networks.

The cloud gives options

There are really interesting options for wireless security when you look at cloud offerings. In my Meraki locations (Meraki is just one example here), I can leverage the native RADIUS server to use full 802.1X-based WPA2-Enterprise security. All I have to do is add the individual users of the service once.

I can also use PSK capabilities (and change them frequently), add splash pages, set up guest networks and walled gardens, do traffic and rate controls, and even provide the texting of passwords to visitors with no extra boxes on site.

Cloud dashboards and small business networks go together like peas and corn, and the remote monitoring and management afforded by cloud offerings is the icing on the cake.

Don’t backslide

Ideally, the wireless security you put into place will accommodate all the various nuances of your client’s operations, and let you sleep at night. It’s fairly common to go weeks and months with everything running smoothly—until the day comes where someone wants to put a new gadget in the environment. Maybe it’s a Chromecast on the break room TV or a digital sign.

First of all, tell your client they can actually say no to devices they don’t want on the network. But if you do try to accommodate the new oddball device, don’t rush it into the network.

Go back to the requirements you defined, figure out where the new device would work, and more importantly, where it should not be in the wireless mix. Keep the most important devices isolated, and don’t let impulsiveness lead to violations of your client’s security posture.

Wireless security doesn’t end with technology

Any IT environment is a mix of network, devices, applications, and people. Of all of these, the human component is frequently the wildcard when it comes to security.

People ignore or misunderstand policy, plug things into nearby ports, open dodgy emails, and do all manner of irresponsible things at the workplace. If you don’t help your client lay down the law on everything from unauthorized devices to social engineering, then you’re asking for trouble.

Exactly how you secure the WLAN environment will vary depending on your scenario. Regardless of what you run with, understand that you’re never really “done”:

  • Make sure the security policy is mandatory reading with employee sign-off, whether your client has two employees or 200.
  • Update PSK, network device, and user passwords at least once a year.
  • Know that PCI requires annual audits, or your client will incur extra fees.
  • Do refresher training for employees every so often.
  • Keep all network component firmware up to date.
  • Use calendar reminders if you need to for keeping it all straight.

And if it feels like security is a pain to stay on top of, you’re headed in the right direction.

The post How to Secure That Small Business WLAN appeared first on Auvik Networks.

Why CloudFlare’s Latest Product Launch May Pose a Risk to Your Clients’ Networks


CloudFlare recently launched a set of public DNS servers that emphasize privacy and performance for all internet users.

Their new anycasted DNS service allows interested users to point their internet-connected devices to CloudFlare’s infrastructure as a DNS resolver, replacing or augmenting services from their ISP.

CloudFlare is using IP addresses 1.1.1.1 and 1.0.0.1 as the primary and secondary IPv4 addresses. With vanity addresses like these, coupled with free privacy and performance benefits, we expect the service to become very popular among system administrators.

These specific IP addresses have the numerical appeal of being easy to remember—but there’s one major problem with them: They’re already being used in private networks today.

That means CloudFlare’s new service can create unintended headaches for IT administrators and managed service providers.

Issues with private vs public address space

In order for bits carrying our requests and information to know where to be routed, they need an address to travel to: these are IP addresses.

IP addresses can be categorized as either public or private addresses. (There are exceptions, but to keep things simple, let’s stick with public and private for now.)

In 1996, the Network Working Group mandated in RFC1918 that certain address ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) be reserved for use as private addresses. That’s because address space limitations mean not every single device can have a public IP address.

So system administrators most commonly configure their network equipment such as switches, routers, and firewalls to hand out IP addresses from one of these RFC1918 address spaces to their clients.

In turn, any requests going to a public destination on the internet are translated from private to public network address space using network address translation (NAT).

While the 1.0.0.0/8 address range has always been considered public, both network equipment vendors and system administrators have gotten into the poor habit of using IP addresses within this range for private use.
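A quick way to check whether an address is actually in RFC1918 private space is Python's standard ipaddress module. This is a minimal sketch with the private ranges hard-coded for clarity:

```python
import ipaddress

# The three RFC1918 private ranges
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(ip: str) -> bool:
    """Return True if the address falls inside an RFC1918 private range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PRIVATE_NETS)

print(is_rfc1918("192.168.1.10"))  # True — safe for internal use
print(is_rfc1918("1.1.1.1"))       # False — public space, now CloudFlare's resolver
```

Anything in 1.0.0.0/8 fails this check—which is exactly why using it internally was always a gamble.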

Depending on the configuration of your network equipment, there could be one or more things your team needs to address because of CloudFlare’s new DNS service.

To emphasize: these problems aren’t new. But with the launch of a free, consumer-friendly service for a protocol that’s critical for the internet to work, issues may surface more prominently than before.

Here are some of our findings this week as we’ve analyzed the data and read reports from across the web.

Affected: Cisco wireless LAN controllers

If you’re using Cisco Aironet wireless LAN controllers and access points, you’ll want to review your device configurations. Cisco’s default documentation and configuration samples use 1.1.1.1 as the captive portal IP—something that would have worked rather reliably up until now.

But with routes to 1.1.1.1 existing once more within the global internet routing table, your perimeter firewall or router may forward guest users’ captive portal requests onto CloudFlare instead of your wireless LAN controller.

Affected: High availability cluster shared IP addresses

When you configure devices like firewalls to be in a high availability (HA) pair, you’ll typically give each of them:

  • A unique IP address to differentiate them, and
  • A shared address they monitor to service requests from downstream devices.

Regrettably, sysadmins commonly use the 1.0.0.0/24 and 1.1.1.0/24 address space for these purposes.
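To illustrate how this bad habit typically appears (not a recommended config—the addresses are the point here), a keepalived VRRP stanza for a firewall pair might carry a shared address from that public range, when an RFC1918 address should be used instead:

```
vrrp_instance FW_PAIR {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    virtual_ipaddress {
        1.1.1.3/24        # bad: public space now routed toward CloudFlare
        # 172.16.1.3/24   # better: RFC1918 private space
    }
}
```

Downstream devices pointing their default gateway or DNS at an address like 1.1.1.3 may now behave unpredictably, depending on where the route resolves.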

Affected: ISP modems and gateways

AT&T customers within this DSLReports thread say some DSL modems are routing requests for 1.1.1.1 back to the LAN interface, effectively blocking the forwarding of these packets from the internet. In large quantities, this could cause a denial of service against the modem.

Recommendations for dealing with the CloudFlare announcement

We recommend you be proactive in investigating whether issues can arise on the networks you manage because of the CloudFlare announcement.

  1. Audit each of your client networks to find devices that may be using one of the affected IP addresses, or a variant. (If you’re an Auvik partner, we can help you with this. See the note at the bottom of the article.)
  2. If you see any devices with the affected IP addresses, determine an appropriate new IP addressing scheme based on the application the CloudFlare IPs are currently servicing.
  3. Define a method of procedure (MOP) for implementing the addressing changes. Ideally, you’ll be testing this MOP within a lab environment that mimics production. But if that’s not available, it’s critical you have a rollback procedure properly identified to minimize client downtime.
  4. Schedule a maintenance window with your client(s), and implement the proposed changes.
  5. Document the changes you’ve made to your clients’ networks.
  6. Update any configuration templates or best practices documents you maintain internally to state that public IP address ranges should not be used for private network applications. It would also make sense to:
    • Circulate the updates to all technical members of staff that may have a role in designing or supporting a network in the future.
    • Ensure such documents are reviewed by all new personnel joining the team as part of your onboarding curriculum.
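For step 1, a simple script can flag affected addresses in your config backups. This is a rough sketch—the `config_backups` directory and `.cfg` extension are illustrative assumptions about how you store client configs:

```python
import ipaddress
import re
from pathlib import Path

# Ranges that collide with CloudFlare's service (1.1.1.1 and 1.0.0.1 live here)
AFFECTED = [ipaddress.ip_network("1.1.1.0/24"), ipaddress.ip_network("1.0.0.0/24")]
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_affected(text: str) -> set:
    """Return any IP addresses in the text that fall inside the affected ranges."""
    hits = set()
    for match in IP_RE.findall(text):
        try:
            addr = ipaddress.ip_address(match)
        except ValueError:
            continue  # skip non-addresses like 999.1.1.1
        if any(addr in net for net in AFFECTED):
            hits.add(match)
    return hits

# Scan every backed-up config in a directory (path is an assumption)
for cfg in Path("config_backups").glob("*.cfg"):
    hits = find_affected(cfg.read_text())
    if hits:
        print(f"{cfg.name}: {sorted(hits)}")
```

Run it against your config repository and any hits become the input to step 2, the re-addressing plan.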

Auvik can help you track down poorly configured IP addresses

If you’re a current Auvik partner, we can help you run a report on which network devices are configured with the affected IP addresses. Simply enable read-only support access from your MSP dashboard, then send an email to support@auvik.com requesting a copy of your personalized report.

If you’re not yet an Auvik partner, contact sales@auvik.com or click the Request a Free Trial button from the Auvik home page to talk to us about how we can help.

The post Why CloudFlare’s Latest Product Launch May Pose a Risk to Your Clients’ Networks appeared first on Auvik Networks.

New WPA3 Security Protocol Holds Promise—But It’s Hard to Know Where It Will Succeed


There’s been a fair amount of buzz around WPA3 since the Wi-Fi Alliance announced a new suite of security enhancements would be introduced for wireless client devices “later” in 2018.

You don’t have to search very hard to find pretty much the same coverage of the news across endless media outlets. The WPA3 announcement rather vaguely promises to bolster Wi-Fi security, mostly for consumer-grade Wi-Fi networks, with the following enhancements:

  • Individualized data encryption: This feature will somehow provide security for users of open, non-password-protected public networks.
  • Stronger encryption: The addition of 192-bit encryption based on the Commercial National Security Algorithm (CNSA) suite, reportedly requested by the US government.
  • IoT no-display devices: Right now, if I need to get a gadget like a Wi-Fi lightbulb onto my network, I need to use an app on my mobile device as middleware of sorts. WPA3 promises some undisclosed new and better method of getting these devices onto the WLAN.
  • New protection when weak passwords are in use: If your network is set up with a wimpy password, WPA3 somehow promises to strengthen the ability to defend against attacks that target weak passwords.

This all sounds great at first pass. But as a 20-year veteran of the WLAN industry, I’m a little skeptical on a number of points. I’m not saying WPA3 won’t be good for the greater wireless landscape once more details emerge, but I think there are concerns that come along with WPA3.

The devil is in the details

Right now, we’re all in wait-and-see mode. The WPA3 feature set in some ways sounds too good to be true, and it will probably amount to that for many existing devices.

There’s a good bet that a huge swath of existing devices on both the router/access point and the client sides won’t be able to support WPA3, so hardware may need to be upgraded to get the new functionality. On the one hand, a phase-in period doesn’t seem unreasonable. But this is where we need to pause and consider the current state of the wireless client device space.

Quite frankly, this space has never been more fragmented when it comes to what features are supported by specific client devices. The same Wi-Fi Alliance that’s bringing us WPA3 has also left far too much in the way of “interoperability” to its member organizations, and each one of those member organizations is in the game for sales and profit.

The point? WPA3—at least for a significant period of time—will mean an already feature-fragmented client device space will only get messier.

Another aspect of WPA3 that doesn’t quite sit square with me is that the Wi-Fi Alliance has opted to focus on improving consumer-grade security, rather than trying to reconcile the fact that so many consumer gadgets are making their way into business WLAN environments.

In enterprise Wi-Fi, 802.1X security is the preference. Yet the Wi-Fi Alliance is doing little to close the gap between consumer and enterprise, and in some ways WPA3 only exacerbates this situation by hyping itself as a cure for a range of security issues while leaving old issues unaddressed.

Finally, at the risk of revealing myself to be a bit of a conspiracy theorist, I’m not so sure the Commercial National Security Algorithm (CNSA) isn’t a backdoor for government eavesdropping. There’s just too much afoot in this regard right now to ignore the possibility.

If CNSA is some sort of backdoor that only the government can leverage, and if it also increases the general effectiveness of Wi-Fi security as well, so be it. (Call me a kook on this if you’d like—I can take it.)

We just don’t know what we don’t know

Despite my less than warm assessment of WPA3, I’m actually hopeful that something good, practical, and achievable comes of it. Without details, it’s hard to know which parts of WPA3 are most likely to succeed.

It’s worth remembering that not so long ago, Hotspot 2.0 was the great hope for securing public Wi-Fi, and it really didn’t go very far. Hopefully WPA3 does better.

As for IoT devices, I’d like to think WPA3 might offer some hope for a brighter security future here—but not at the expense of making enterprise WLAN environments where IoT devices are used even messier than they are now.

Maybe if the Wi-Fi Alliance had offered more than promises with their WPA3 announcement, it would be easier to get a little more excited about it.

The post New WPA3 Security Protocol Holds Promise—But It’s Hard to Know Where It Will Succeed appeared first on Auvik Networks.
