Cisco Hypershield: Welcome to a New Era!

Cisco has just announced its latest security initiative: Cisco Hypershield! There have been many questions about it, which is understandable. It is not a new device in the usual 19″ 2U format we install in our rack cabinets. It’s an unfamiliar concept, consisting of several relatively new technologies and a new way of thinking about the problem. So, instead of trying to summarize it in a single word, let me start from the beginning.

Once upon a time, firewalls were physical boxes with two, maybe three interfaces. Good and kind people were connected internally, while the evil, nasty hackers were on the outside – and perhaps a mail gateway or web server sat on a DMZ.

Figure: A traditional firewall setup – good and kind people connected internally, while the evil, nasty hackers are on the outside.

These firewalls fit perfectly into the IT architecture of the time, where our machines and data were housed internally – precisely what we needed to protect with our firewall.

As things evolved, we realized that not all outsiders were bad. There were also customers out there whom we wanted to invite into our business. Marketing, web shops, and self-service moved online, resulting in significant efficiency gains.

However, this also meant that the firewall had to be moved further into the company to segment the internal systems as needed. Systems with customer access, partners, and various internal units, like clients and servers, were separated on different firewall interfaces to control traffic between them.

It was also a time when an application typically ran on a single machine. In fact, multiple applications might run on the same machine. It was common to see a file and print server, AD, and DNS on the same server. Similarly, an SMTP server (mail between companies) and an IMAP server (clients’ mail access) might share the same physical machine. The firewall sat in the middle of it all, and by using IP addresses and port numbers, one could control access to these applications.

Things are different now. With the advent of cloud and containers came micro-services. This means that all the functions and services previously embedded in applications, like the mail server mentioned above, now appear as micro-services. Your SMTP and IMAP services run in separate containers and are not necessarily on the same machine. They can be spread across multiple servers, even different data centers in various parts of the world.

However, the biggest challenge is that multiple containers are likely running the same service for scalability and redundancy, meaning you never know where the functions are located. The services move around dynamically. It wouldn’t surprise anyone anymore if I said that I am currently sending and receiving my emails in Frankfurt but reading them from an IP address belonging to a data center in Ireland.

But where does that leave our good old firewall? Where do you place your firewall to control traffic, and how do you configure it when you don’t know where your applications and functions are located?

A paradigm shift – new technology and new combinations

We are facing a paradigm shift, and it’s becoming clear that our reliable old firewall can’t quite keep up, despite cluster technology and NG (next-generation) features.

Cisco has wisely looked towards the cloud and container world, acquiring new technology and combining it in a fresh, innovative way. There are three primary technologies we will need to familiarize ourselves with in the future. Let’s start with a quick overview:

eBPF – Extended Berkeley Packet Filter

eBPF is a technology that allows dynamic injection of functions into an OS kernel without recompiling it. Unlike programs that run on top of the kernel, eBPF operates within the kernel as part of it. This means you can execute functions with minimal footprint and gain direct access to all system calls, kernel resources, and information. You can access data on active processes and libraries on machines, control access to CPU, RAM, disk, and, crucially, all network packets going in and out of the machine, as well as to and from individual processes and applications. eBPF offers numerous advanced features. For instance, you can create and use custom “tags,” eliminating concerns about IP addresses and port numbers. Additionally, if desired, you can allow or block traffic and redirect it, such as to a proxy.
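
To make this less abstract, here is a minimal sketch of the mechanism using the open-source BCC toolkit – plain eBPF, not Hypershield code. Assuming a Linux host with bcc installed, a NIC named eth0, and root privileges, a few lines are enough to drop a given type of traffic directly in the kernel, before it ever reaches an application:

```python
# A minimal sketch of kernel-level packet control with eBPF/XDP, written
# against the open-source BCC toolkit. This is NOT Hypershield code – it
# only illustrates the mechanism. Assumes a Linux host with bcc installed,
# a NIC named "eth0", and root privileges.
import time
from bcc import BPF

prog = r"""
#define KBUILD_MODNAME "ebpf_sketch"
#include <uapi/linux/bpf.h>
#include <linux/in.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/tcp.h>

int xdp_drop_telnet(struct xdp_md *ctx) {
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)    return XDP_PASS;
    if (eth->h_proto != htons(ETH_P_IP)) return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)     return XDP_PASS;
    if (ip->protocol != IPPROTO_TCP)     return XDP_PASS;

    struct tcphdr *tcp = (void *)ip + ip->ihl * 4;
    if ((void *)(tcp + 1) > data_end)    return XDP_PASS;

    /* Drop inbound Telnet (TCP/23) directly in the kernel data path,
       before any application ever sees the packet. */
    if (tcp->dest == htons(23))
        return XDP_DROP;

    return XDP_PASS;
}
"""

b = BPF(text=prog)
fn = b.load_func("xdp_drop_telnet", BPF.XDP)
b.attach_xdp("eth0", fn, 0)
print("Dropping inbound TCP/23 on eth0 in the kernel – Ctrl-C to detach.")
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    b.remove_xdp("eth0", 0)
```

The rule itself is trivial; the point is that the check runs inside the kernel, with full access to the packet and to the process context around it.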

Originating from the open-source world, eBPF has long supported cloud providers’ business models. In the past, even when multiple applications and services were on the same machine, the CPU was likely idle over 80% of the time, and the disk was only half full. This “resource waste” underpins the business model of cloud providers, who sell CPU cycles and disk space, not the hardware itself.

Controlling processes and resources has been crucial to making this model work. It wouldn’t be good if one customer’s process could kill another’s, or read and write another customer’s memory or disk. eBPF addresses this problem and can answer questions about system calls, active processes, libraries, and resources. We can leverage all of these capabilities, controlling what each workload is allowed to do and making use of the information it exposes.
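
The visibility side is just as easy to demonstrate. A few lines of eBPF – again sketched here with the open-source BCC toolkit, not Hypershield – can log every new process executed on a host, straight from the kernel, much like the well-known execsnoop tool does:

```python
# A small sketch of eBPF's visibility side: log every new process executed
# on the host, straight from the kernel. Plain eBPF via the open-source BCC
# toolkit (run as root), in the style of the execsnoop tool – not Hypershield.
from bcc import BPF

prog = r"""
TRACEPOINT_PROBE(syscalls, sys_enter_execve) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    char comm[16];
    bpf_get_current_comm(&comm, sizeof(comm));
    bpf_trace_printk("execve by %s (pid %d)\n", comm, pid);
    return 0;
}
"""

b = BPF(text=prog)
print("Tracing new process executions – Ctrl-C to stop.")
b.trace_print()
```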

Remember Log4j in December 2021? A vulnerability appeared in a widespread library, causing many of us to spend our holidays checking machines for the vulnerability. We simply lacked an overview. With eBPF, we have better visibility and can implement mitigating controls to prevent disasters, securing future holiday seasons.

Imagine thousands of small, interconnected baby-firewalls within a large eBPF domain. All your workloads, and eventually your network devices, will participate in this domain. With this kernel-level insight, we can create firewalls based not just on IP addresses and port numbers but also on processes, libraries, and resource usage.

We can define rules allowing a process on a workload in one part of the world to communicate with a process on a workload in another. There might be no need to send IP packets over the network if the receiving process isn’t running!

“But that’s just for our workloads; what about network devices?” – you might ask – and thank you for that!

DPUs (Data Processing Units – SmartNICs)

Cisco has a strategic partnership with Nvidia, a company you probably know for its high-performance graphics cards for gaming PCs and its exceptional GPUs (graphics processing units). Nvidia also produces DPUs (Data Processing Units – SmartNICs) with the same high performance: the network card’s equivalent of the GPU. DPUs are already present in several Cisco devices, and moving forward, they will likely become the norm rather than the exception.

Besides being incredibly fast, DPUs have a feature that plays a significant role in Cisco’s announcements: Dual-Path Technology. This is a result of thoughtful innovation.

The technology essentially sends all data to two data planes. One is for production, while the other simply drops the packets. However, we continuously measure what the outcome would have been for the packets in our “test path.” This allows us to test new rules, apply patches, and even upgrade our firewall “on the fly.” If everything looks good, we switch to the other data plane and call it production. An integrated test environment – what’s not to like? I can’t count the number of times I’ve sat in a dark basement, wishing all applications would still run after a firewall upgrade.
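
Conceptually, the “test path” is a shadow rule set evaluated against the same traffic as production, where only the production verdict is enforced and every divergence is recorded. A rough Python sketch of that pattern – my illustration, not Cisco’s DPU implementation – could look like this:

```python
# A rough illustration of the dual data-plane idea: every packet is evaluated
# by both the production rule set and a candidate rule set, but only the
# production verdict is enforced; divergences are merely counted. Conceptual
# sketch only – not how Cisco's DPUs actually implement it.
from collections import Counter

def evaluate(ruleset, packet):
    """Return the verdict of the first matching rule, default deny."""
    for matches, verdict in ruleset:
        if matches(packet):
            return verdict
    return "deny"

def process(packet, production, candidate, divergences):
    prod_verdict = evaluate(production, packet)   # this verdict is enforced
    cand_verdict = evaluate(candidate, packet)    # this one is only measured
    if prod_verdict != cand_verdict:
        divergences[(prod_verdict, cand_verdict)] += 1
    return prod_verdict

# Example: the candidate rule set tightens TCP/445. Run it in the shadow
# path for a while and see how often the two planes would disagree before
# promoting it to production.
production = [(lambda p: p["dport"] == 445, "allow")]
candidate  = [(lambda p: p["dport"] == 445, "deny")]
divergences = Counter()
for pkt in [{"dport": 445}, {"dport": 443}, {"dport": 445}]:
    process(pkt, production, candidate, divergences)
print(divergences)   # Counter({('allow', 'deny'): 2}) – review before switching
```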

Oh, did I mention that these DPUs are designed to participate in your new eBPF domain? Voilà – firewall anywhere!

Artificial Intelligence

Yes, I know some readers might be rolling their eyes and thinking of closing their browsers, but hold on a moment … Yes, ChatGPT can make mistakes, and I understand the need to respect this new technological revolution, but honestly, I’m not as afraid of AI as I am of RHS (Real Human Stupidity). Besides, AI is already part of our daily lives – like it or not!

If you’ve read this far, chances are you own a smartphone. If so, it’s probably been a while since you updated the physical fold-out map in your car. You might never have planned a road trip with paper maps on the dining table. Most of us rely on AI: CarPlay, Maps, Tuscany – Go!

When all the passengers are focused on a promised ice cream or the next restroom stop – or are fast asleep before the car leaves the driveway – I gladly accept all the help technology can offer.

AI shines here – with numerous data points from other drivers, their speeds, road layouts, and external “intelligence” like accident reports, construction updates, and even weather conditions. Combined with historical “big data” and advanced algorithms, AI guides us around traffic jams and accidents, ensuring we reach Italy each year.

Even if I had access to all this data, I could never process it manually – especially not in real-time. I’m a huge fan!

In the old days, we manually gathered information on applications, port numbers, threats, and vulnerabilities to shape our security setups and firewall rules. It was a slow, manual process. Now, thanks to eBPF, workloads, big data, and intelligence, we have a wealth of information available. In fact, we have so much information that it would be impossible for a firewall administrator to manage and interpret it all. That’s why we turn to AI… and as a wise person once said: “AI is what we call it when we don’t quite trust it yet – after that, we just call it automation.”

In the future, we will use AI to define our rule sets. Initially, we might need to approve them, but with all our information, these rules can be much more granular and dynamic. Imagine a process on a machine that communicates with another machine’s process once a month, such as payroll processing. Both machines are part of your security domain, and with eBPF, we know both processes are running. We also know when the communication should occur and what data is involved. The system can thus create a firewall rule on both machines and all network devices in between, allowing this specific communication with the respective data. The rule could even be time-specific, activating only once a month with the current IP addresses. All other machines, processes, IP addresses, and data wouldn’t meet these criteria, and their communication wouldn’t be allowed. If one process isn’t running, there’s no need to open all our firewalls.
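
To make the payroll example slightly more tangible, here is a hypothetical sketch of what such a context-specific rule could look like: matching on process identity and a time window instead of static IP addresses and ports. All the names are invented for illustration – this is not Hypershield’s actual rule format.

```python
# Hypothetical sketch of a context-specific rule like the payroll example:
# traffic is allowed only if both named processes are actually running and
# the agreed monthly window is open. All field names are invented for
# illustration – this is not Hypershield's actual rule format.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Rule:
    src_process: str      # e.g. the HR export job
    dst_process: str      # e.g. the payroll batch service
    day_of_month: int     # the one day each month the flow is expected

    def allows(self, src_running: set, dst_running: set, now: datetime) -> bool:
        return (self.src_process in src_running
                and self.dst_process in dst_running
                and now.day == self.day_of_month)

rule = Rule(src_process="hr-export", dst_process="payroll-batch", day_of_month=1)

# Both processes are running and it is the 1st: open the path end to end.
print(rule.allows({"hr-export"}, {"payroll-batch"}, datetime(2024, 7, 1)))   # True

# Any other day, or if either process is not running, nothing is opened.
print(rule.allows({"hr-export"}, set(), datetime(2024, 7, 1)))               # False
print(rule.allows({"hr-export"}, {"payroll-batch"}, datetime(2024, 7, 15)))  # False
```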

I know this is a simplified example, and you’re right to question it, but the point is we have a vast amount of information and telemetry that we can intelligently use to build context-specific rules.

eBPF, DPUs in Network Equipment, and AI

Indeed, these are three key components: eBPF, DPUs in network equipment, and AI. Together, they form the IT architecture’s equivalent of a “Kinder egg,” providing the essential building blocks to address future (and current) firewall tasks and more:

Segmentation

Segmentation is the core task that firewalls are designed to address. We’re talking about more than just stateful firewalls; this includes NG features and integration into the core of our workloads. Rules can be based on processes, libraries, and resource usage. Combined with the distributed architecture, this allows us to implement firewalls in many more locations, fitting much better into the new, more distributed world. It also holds the promise of achieving our micro-segmentation goals within our data center.

Administration and Maintenance

With the help of AI and the Dual-Path technology, we can roll out new functions, patches, and rules while remaining operational. We don’t have to wait for “maintenance windows” and hope that applications still work afterward – IT’S TESTED!

Vulnerability Management

With real-time intelligence and insight into workload configurations, effective vulnerability management is possible – significantly reducing the time to mitigate.

Rule Generation and Maintenance

“Up-to-date rules” that are tested and ready for approval – and once we build trust in the system, we call it automation.

Together, these three components – eBPF, DPUs, and AI – create a robust and adaptive framework for managing and securing modern IT infrastructures.

OK – 2 of them, please!

Hold your horses. Cisco has committed to a journey to take us to a better place. The company has acquired technology firms to lay the tracks ahead of this forward-moving train.

Isovalent

Isovalent is to the cloud – and especially to Kubernetes – what Cisco is to the “old network world,” and they have only four keys on their keyboard: e-B-P-F. They are therefore no strangers to Kubernetes users, among whom they hold an absolutely dominant position. But the strategy extends beyond this. DPUs are only the first step – tomorrow, every cloud VM, serverless function, database, etc. will be included! This allows you to bring these functions into your own eBPF domain with your own rules.

Splunk

Many of us know Splunk from the SIEM world, and rumors suggest they are developing advanced AI. Remember, the company has been built on ML (Machine Learning) and AI from the beginning.

Cisco can take on this task thanks to its close collaboration with Nvidia and a number of other acquisitions and partnerships. It will be a journey, and the first stop will likely come sooner rather than later. Rumors suggest an eBPF Agent will hit the market this summer, and after that, the train will move on to the next station.

However, you shouldn’t expect a fully operational system during this year’s travel season. You must continue to protect your systems, update your firewalls, and stay informed about threats and vulnerabilities as usual. It’s also crucial not to delay new security investments, as it’s too risky. Cisco aims for a smooth transition, both technically and commercially.

But… start getting used to the idea. Learn about AI, use it, and keep up with its development. Read up on eBPF and explore the possibilities. Ask your suppliers about it and consider whether your organization and processes can embrace this new technology. The train is coming – there’s no doubt about it. The only question is, when will you get on board?
