The sky’s the limit with the two sides of AI and networking


A major shift in artificial intelligence (AI) and networking was highlighted at the HPE Discover 2024 conference by Jensen Huang, founder and CEO of NVIDIA, a company with a pivotal role in the AI tech ecosystem.

During his keynote, Huang observed that the era of generative AI (GenAI) was here and that enterprises had to engage with “the single most consequential technology in history”. He told the audience that what is now happening in the industry is the greatest fundamental computing platform transformation in 60 years: a shift from general-purpose computing to accelerated computing, from processing on CPUs alone to processing on CPUs plus GPUs.

“Every company is going to be an intelligence manufacturer. Every company is built fundamentally on domain-specific intelligence. For the very first time, we can now digitise that intelligence and turn it into our AI – the corporate AI,” he said.

“AI is a lifecycle that lives forever. What we are looking to do in all of our companies is to turn our corporate intelligence into digital intelligence. Once we do that, we connect our data and our AI flywheel so that we collect more data, harvest more insight, and create better intelligence. This allows us to provide better services or to be more productive, run faster, be more efficient, and to do things at a larger scale.”

While the broader ramifications of the partnership between parent company Hewlett Packard Enterprise (HPE) and NVIDIA are unknown right now, David Hughes, chief product officer of HPE Aruba Networking (HPE’s security and networking subsidiary), said there are more pressing issues around the use of AI in enterprise networks – in particular, around harnessing the benefits that GenAI can offer in the world of CPUs plus GPUs. Hughes believes that the deployment of AI in his industry has two sides: one is AI for networking, and the other is networking for AI.

He said that there are subtle but fundamental differences between these two sides: “Networking for AI is about building out first and foremost the kind of switching infrastructure that’s needed to interconnect these GPU clusters. And then a little bit beyond that, thinking about the impact of collecting telemetry on a network and the changes in the way that people might want to build out their network. So, that’s all networking for AI.

“The other area, AI for networking, is one where we spend time from an engineering and data science point of view. It’s really about [questioning] how we use AI technology, to turn IT admins into super admins so that they can handle their escalating workloads independent of GenAI, which is kind of a load on top of everything else, such as escalating cyber threats and concerns about privacy. The business is asking IT to do new things, deploy new apps all the time, but they’re [asking this of] the same number of people.”

Hughes believes it is important to show these hard-pressed IT admins how to take best advantage of automation and AI to take more off their plate so that they can scale. He revealed that his company has a team of a few thousand data scientists working on how to incorporate AI more broadly, including classification AI and GenAI, into the company’s products, particularly the Aruba Central cloud-based management system.

After drawing the distinction between AI for networking and networking for AI, Hughes said the main job will be to take this technology to those who will be using it. The challenge will be to articulate what AI and networking means for those running the networks at HPE Aruba customers such as Espai Barça, said to be the “largest and most innovative” sports and entertainment space in a European city; the Tottenham Hotspur football stadium; the Dallas Cowboys’ AT&T Stadium; and the Mercedes-AMG Petronas Formula One team.

“The key [for users] is how it transforms their jobs,” said Hughes. “For us, that’s about explaining the change rather than just saying, ‘Here’s some more tech this year.’ Our main job is taking this technology to make [operations] more efficient. So, instead of users having to figure out how to use AI to make lives better, we’re going to do that for them.

“There are obviously some domain-specific things that they need to take care of, but in terms of building a network that largely runs itself, we should be doing that. So, that’s really where we are investing, taking a very inspirational high level [of technology] down to the absolute nuts and bolts.”

For HPE Aruba customers such as BMW and General Electric, which are moving into the realms of AI-based digital twins to support their advanced engineering environments, getting down to the nuts and bolts is an almost literal requirement.

The granularity of the HPE Aruba AI offer extends to recommendations for not only individual customers, but also individual sites and even individual access points, with the latter offering direction on the best firmware to run. Hughes explained why this is important to customers in a typical wireless network deployment.

“If you ask someone, they say the thing that is most successful is based on their personal experience,” said Hughes. “When we have AI, it is looking at all different factors about that particular access point [AP] – the size of the venue, what is the people density, the types of things they’re doing with it, the types of end systems they’ve got. Maybe it’s a place where everyone’s got iPhones or some other type of phone.

“The AI realises that you’ve got those kinds of endpoints in the mix, and that perhaps these particular releases of our AP don’t work as well as those other ones [because of a bug, for example]. It will take all of that into account to recommend … [the hardware] for a site, the firmware, the APs, and so on.”

Key to this is that the intelligence and output delivered is based on the data collected and processed through the Aruba Central system. This is currently managing data from roughly four million devices – meaning access points and switches – representing more than a billion telemetry endpoints, such as phones and laptops, collected into one big data lake, which is used to train the language model for the AI offer, said Hughes.

“It’s way better than any admin can do based on their individual experience just for that firmware example, and it’s multiplied about 100 times with all other kinds of recommendations. For many of these recommendations, we suggest to the admin, ‘Do this’, but there’s a checkbox saying, ‘Yes, I want to do it. And if you see something similar to this in future, do it automatically.’

“And so that turns into closed loop AI automation. And this has been a major push for us. I believe that we’re really only at the beginning, because there’s a lot more we can be doing. We should be getting to the point where things really are running themselves and the triaging is completely automatic. We’re making really good progress, but the sky’s the limit.”
