This is the first in an occasional series on AI and ML tools, technologies, and implications. I’m not an AI/ML expert, but here are some practical issues and other important ideas to think about as these tools come into the mainstream.

FIRST: PRACTICAL CONSIDERATIONS

Don’t Wait

For my SE/SA/Architect/CTO colleagues, you can’t ignore this or be a late adopter. These are tools that you need to learn about and know how to use, along with their advantages and disadvantages. Your customers will be asking about the technology in general. They’ll also have specific questions about the AI tools and features being integrated into the products you sell, and/or about integrations with other AI tools.

System Configuration

One particular implication is that AI will fundamentally change device and system configuration and scripting, becoming another tool that helps automate operations. This will make it increasingly easy to set up demos, run POC activities, configure competitor systems and equipment you don’t have expertise with, and more.

If you’re someone who does device and system configuration, deployment, and operations for a living (e.g., network engineers), this will impact you. It may make hands-on config-oriented certifications less important, and someday irrelevant. You really need to learn how to leverage AI tools for these activities NOW. Don’t wait for your employer to “provide an opportunity” or force you to; start learning about this NOW so you can be ready before changes come. The advantage: this could make you significantly more productive, help you get things done more easily, and provide you with additional time to learn more important and interesting things. You’ll need AI/ML experience for the next phases of your career.
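As a rough illustration (not a recommendation of any particular tool), here is what using an LLM to draft a device configuration might look like from Python. This is a minimal sketch, assuming the OpenAI Python client and an API key in the environment; the model name and the Cisco IOS task are invented examples, and you would still review every line it produces before it goes anywhere near real equipment.

    # Sketch: asking an LLM to draft a device config snippet.
    # Assumes the openai package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "Generate a Cisco IOS configuration snippet that creates VLAN 100 named STAGING "
        "and assigns interface GigabitEthernet0/1 to it as an access port. "
        "Return only the configuration commands."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a careful network configuration assistant."},
            {"role": "user", "content": prompt},
        ],
    )

    # Treat the output as a draft for human review, not a deployable config.
    print(response.choices[0].message.content)

The point of the sketch is the workflow, not the specific API: you describe the intent, the tool drafts the commands, and a human validates before anything is deployed.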

There are obviously similar impacts here for software development. I’ll leave that discussion to experts in that field. But if you have more to add on this or anything else here, please let me know via Contact.

Become an AI Whisperer

Learning how to “talk to the AIs” is rapidly becoming a skill set of its own. Disciplines like “Prompt Engineering” and “Prompt Priming” are emerging, all about setting context and getting useful results from an AI tool. If you remember the early days of Google, there was a talent for “Google Hacking”: constructing your search in such a way that you got one result – the RIGHT result. AI Whispering is essentially Google Hacking on steroids. There is a useful brief video on these concepts here – and I’m sure there are many others.
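To make “priming” concrete, here is a tiny, made-up comparison of a bare question versus a primed prompt that supplies role, context, and output constraints up front. The specifics are invented; the point is the structure.

    # Toy example of prompt priming: same question, with and without context.
    bare_prompt = "How do I configure BGP?"

    primed_prompt = """You are a senior network engineer reviewing a lab design.
    Context: two routers running IOS XE, a single eBGP peering, no route reflectors.
    Task: list the configuration steps to bring up the eBGP session.
    Output: a numbered list of command groups, no extra prose."""

    print(bare_prompt)
    print(primed_prompt)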

Confidence in Results

There are issues with confidence in the results from AI tools. In oversimplified terms, Large Language Models (LLMs) produce probabilistic results, presenting the responses most likely to be correct based on the training data they’ve been given. You will eventually get results that are incorrect; the AI will make things up (“hallucinate”) and present them with complete confidence. YMMV – watch out for these issues and proceed with caution. You may see better results from one tool than another, which could reflect, in part, differences in the training data behind each tool.
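For intuition on why this happens, here is a toy sketch of probabilistic next-token selection. The candidate tokens and probabilities are completely made up; the point is that a plausible-but-wrong continuation can still be sampled, and the model presents whatever it samples with the same fluency.

    import random

    # Toy model of next-token sampling: the "model" assigns probabilities to
    # candidates and samples one. The numbers here are invented for illustration.
    next_token_probs = {
        "1989": 0.55,  # the correct answer in this toy example
        "1991": 0.30,  # plausible but wrong
        "1987": 0.15,  # plausible but wrong
    }

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())

    # Run it a few times and you'll occasionally get a confident wrong answer.
    for _ in range(5):
        print(random.choices(tokens, weights=weights, k=1)[0])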

One Thing it’s Definitely Good For

One thing AI is definitely good for is requests along the lines of “give me a list of things to consider when doing xyz”. You then take those results for further research, validation, and expansion. This pairing of compute-enabled tools with human oversight is powerful – which should not be a surprise. See more on this below under “Freestyle AI”.
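A made-up example of the pattern: ask for the checklist, then treat every item as a starting point for human research rather than a finished answer.

    # Sketch of the "give me a list, then validate it" workflow.
    # The prompt and the sample response are illustrative, not a real process.
    checklist_prompt = (
        "Give me a list of things to consider when migrating a campus network "
        "from one vendor's switches to another's."
    )

    # Imagine this came back from whatever tool you use.
    ai_response = [
        "Licensing and support contracts",
        "VLAN and spanning-tree interoperability",
        "Management and monitoring integration",
    ]

    # Human oversight: each item gets researched, validated, and expanded by a person.
    for item in ai_response:
        print(f"TO VALIDATE: {item}")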

New Security Issues

AI will introduce new security issues by enabling bad behavior, creating new exploits, and enhancing existing ones. It will also enable us to build better defenses. Additionally, there is a new set of security challenges around protecting the models themselves. Imagine the ability to “poison” an AI model with intentionally deceptive data, training it to behave badly. For example, in an application that is supposed to identify tanks, making a tank look like a meerkat would be a bad thing. The 2023 RSA Conference showcased many companies focusing on security issues around AI/ML, including HiddenLayer, which won this year’s Most Innovative Startup award. This is just the beginning of some very interesting work.
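To make the poisoning idea concrete, here is a toy sketch of one simple form of it: flipping a fraction of training labels so the model learns the wrong association. Real attacks are far more subtle; the dataset, labels, and fraction here are all invented.

    import random

    # Toy illustration of training-data poisoning via label flipping.
    # A real attack would be subtler; this just shows the idea.
    clean_data = [(f"image_{i:03d}.jpg", "tank") for i in range(100)]

    def poison(dataset, fraction=0.1, wrong_label="meerkat"):
        poisoned = list(dataset)
        flip_count = int(len(poisoned) * fraction)
        for idx in random.sample(range(len(poisoned)), flip_count):
            filename, _ = poisoned[idx]
            poisoned[idx] = (filename, wrong_label)
        return poisoned

    poisoned_data = poison(clean_data)
    flipped = sum(1 for _, label in poisoned_data if label == "meerkat")
    print(f"{flipped} of {len(poisoned_data)} labels flipped to 'meerkat'")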

New Capabilities for Network Automation

It’s clear that there are many new capabilities that will be enabled and enhanced by AI and ML. One particular area of interest to me is the use of AI for network and cloud operations and automation. We’ve been talking about Network Automation for a long time (for me, at least 15 years), and it just hasn’t taken off like one would expect. I see AI tools enhancing functionality here and providing a potential inflection point for the adoption of network automation at greater scale.

There is no doubt that interesting work and innovation are already going on here: see Selector, Kentik, Itential, Gluware, NetBox, Network to Code, and many others. New tools that could (a) learn new telemetry formats (standard or proprietary), (b) implement operator intent in much simpler and more effective ways, and/or (c) impact the economics of acquisition and operation all have significant potential to accelerate the adoption of network automation. I’ll definitely be spending more time here and look forward to collaborating with others in this area.
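As one sketch of what (a) might look like in practice, here is a rough example of asking an LLM to normalize an unfamiliar telemetry record into a known schema. The OpenAI client is just one possible interface, and the vendor record, target schema, and model name are all invented for illustration.

    import json
    from openai import OpenAI

    # Sketch: normalizing an unfamiliar telemetry record with an LLM.
    # Assumes the openai package and OPENAI_API_KEY; everything else is invented.
    client = OpenAI()

    vendor_record = "IF:ge-0/0/1 RX_OCTETS=123456789 TX_OCTETS=987654321 ERR=12 TS=1683912345"
    target_schema = {"interface": "", "rx_bytes": 0, "tx_bytes": 0, "errors": 0, "timestamp": 0}

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Convert the telemetry record into JSON matching the schema. Return JSON only."},
            {"role": "user",
             "content": f"Schema: {json.dumps(target_schema)}\nRecord: {vendor_record}"},
        ],
    )

    # Parse and validate this output before trusting it in a real pipeline.
    print(response.choices[0].message.content)

In a production setting you would validate the returned JSON against the schema and fall back to a human (or a conventional parser) when it doesn’t conform, which is exactly the human-plus-machine split discussed under “Freestyle AI” below.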

The One Constant

This will all change. We are in a time of significant development and evolution for AI and ML. New capabilities, integrations, and application areas will continue to emerge rapidly (but be sure to see the caveat below on The Other Constant). There are good books out there on theory (see one recommendation below), but keeping up with the changes can be a weekly or even daily exercise. Podcasts, videos, articles, and blog posts seem to be the best way to keep up to date for now. Getting access to these tools and experimenting with them is also a necessary path to understanding how they work, while keeping an eye out for limitations and results that just don’t look right.

The Other Constant

Don’t forget the hype. Everyone will be – already is – talking about how their products and services either use AI/ML or enhance AI/ML performance. Some of this is bound to be true. Pay attention, be discerning, and see what is useful over time.

SOME HEADIER STUFF

Freestyle AI

Economist and GMU Professor Tyler Cowen, in his 2013 book Average is Over, discusses humans versus computers in chess competition and the emergence of “Freestyle Chess”, where humans using chess programs compete against other humans with chess programs to play the best game possible. There is a real divide-and-conquer here that lets humans do what they do best and machines do what they do best. I see “Freestyle AI” as a likely outcome, where AI tools leveraged by human users provide the best results and outcomes. To read more on this, visit Cowen’s blog Marginal Revolution.

The Naming of Things

I wonder if we have made a mistake by using the terms “Intelligence” and “Learning” in describing these technologies. Practically, it’s too late to reverse this. But I think the terms create the impression that these systems are intelligent and that they learn just like humans do. I don’t think that’s the case. “Artificial” and “Machine”, respectively, are necessary modifiers here.

The Call to Pause

There are prominent voices calling for a pause on AI/ML development. I am not sure what this will accomplish; bad actors will ignore any regulatory approach (or not be subject to it) and will continue to make progress. We also need to watch potential motives for asking for a pause; some may call for a pause to prevent others from catching up to existing tools. There is always the possibility of hidden or mixed motives in situations like this.

Don’t Get Fooled Again

The call for a pause does resonate with me in one particular area: we don’t want a repeat of the mistakes we have seen with Social Media. We seem to learn more daily about the negative impact of social media on mental health, physical health, attention span, and more. That doesn’t mean there is a straight line connecting Social Media to AI/ML. But with this fresh in our minds, and the negative impact staring many of us in the face, let’s be prudent and proceed with appropriate caution. We should be studying and anticipating the negatives of AI/ML and related technologies as best we can.

The Three Laws

The legendary writer and biochemistry professor Isaac Asimov gave us a great starting point for putting limits on AI in his Three Laws of Robotics:

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Yes, these were presented in a fictional context, but why aren’t we talking more about using Asimov’s Three Laws as a fundamental starting point for AI/ML tools? I’m sure there are conversations happening on this somewhere.


FOR FURTHER CONTEMPLATION

I greatly appreciate Javier Antich’s excellent book Machine Learning for Network and Cloud Engineers: Get ready for the next Era of Network Automation. It was a great starting point for me, coming from a background in network engineering. It also made me feel better that all the statistics and mathematical optimization techniques I learned 30+ years ago still matter in AI and ML.

I highly recommend several recent (May 2023) Podcasts on AI:

As always, citing authors and podcast creators does not imply that you agree with everything each creator says. I do hope you find them useful.