There’s no shortage of artificial intelligence applications in production at AT&T. AI models optimize field service dispatch to maximize service efficiency and minimize miles driven. They enhance network planning and operations, and they head off fraudulent orders that could prove costly for the US$122.3 billion telecom company. AI drives the company’s end-to-end incident management solution, which handles everything from preventive maintenance to automated customer notifications. Its gen AI platform “Ask AT&T” has 100,000 users turning to it for everything from coding help to deep research, generating about 5 billion tokens a day (roughly the equivalent of 5 billion words).
AI is also behind AT&T Active Armor, which blocks or labels more than 2 billion robocalls per month for customers. Machine learning algorithms scour network administration logs and perform real-time behavior-based analytics to detect and respond to cybersecurity threats.
The growing roster of AI use cases is not without attendant risks. No one knows that better than Rich Baich, senior vice president and chief information security officer at AT&T, who likes to say, “Who’s watching the AI that’s watching the AI to ensure the AI is doing what the AI is supposed to be doing?”1
Having spent more than 20 years in cybersecurity, Baich is keenly aware that AI’s superpowers could be harnessed for malevolent aims. He has considered a nightmare scenario: What if, in the future, bad actors combine AI and quantum computing? “Talk about unpredictability,” he says, “and unknowns.”
Yet Baich shows little fear when he talks about mitigating the risks inherent in the ever-expanding enterprise application of AI. Why? He’s seen this movie before. “What we’re experiencing today is no different than what we’ve experienced in the past,” Baich says. “The only difference with AI is speed and impact.”
There are established playbooks to apply and emerging standards to adopt. “It’s not just about having a governance committee,” Baich explains. “What we’re doing is taking 30 years of information security practitioner work and applying it to this new AI age.”
At the core of the risk mitigation strategy is a tightly governed software development life cycle (SDLC) approach to AI. This structured process helps ensure software is designed, built, and maintained securely from inception. “The biggest thing that separates smart organizations is that SDLC mentality,” Baich says. No AI gets a pass at AT&T. Vendor AI? Internally developed model? Each must be tested and red-teamed, meet architectural requirements, and have access controls in place. To accomplish that, Baich’s team has to “give developers and the business the tools and knowledge to securely bring forth the AI that they need, validating it before it goes from pre-production into a production environment—just like anything else.”
One of those tools is a security-specific generative AI platform that has ingested AT&T’s security policies, standards, and guidance to become a practical query tool. If an employee needs to secure an internet-facing AI application, all they have to do is ask. Complementing this, AT&T’s AI Security Center of Excellence (CoE) helps employees understand what AI is, what the risks are, and how to use or implement it securely.
Baich is also leading the charge to meet ISO 42001, the first international standard to help organizations manage AI systems responsibly throughout their life cycle. “Everyone should strive to get there,” he says. “It doesn’t necessarily mean you’re 100% secure from an AI standpoint, but you’ve demonstrated that you have all the ingredients [in place].”
Everyone in the business needs to be on board with AI security, which demands education, internal marketing, communication, and repetition. “Just like traditional security awareness programs [telling users], ‘Don’t click on that URL,’” Baich says. “You really have to hit the workforce hard with this information.”
Despite such efforts, shadow AI—where employees use AI tools and systems without the organization’s formal approval—can be an issue at any company. Vendors sell business leaders on AI agents that promise to increase efficiency. “The potential is that they drop it in, and we don’t know about it,” says Baich. “Then we find out about it, and we have to bring it back through the SDLC.”
The goal is not restriction, though; it’s empowerment. That’s also why AT&T gives nearly all its employees access to the “Ask AT&T” tool.
“It’s a cultural journey, making sure our employees are knowledgeable,” Baich says. “We need all facets of the organization to understand AI and move it through the CoE.”
Baich has developed a mutual understanding with AT&T’s business leaders, striking a balance between moving quickly to adopt AI that can deliver business value and competitive advantage, and protecting the organization and its data. “They understand that their job is enablement and my job is risk,” Baich says. “Do they want to run faster? Yes. Do I have speed obstacles in their way? Yes. Are those good? Yes.”
“It’s not about running scared,” Baich says. “It’s about enablement, understanding the risks associated with AI, and designing processes and implementing technologies that allow us to operate within our risk appetite.”