GSA Does That!?

Power of AI

U.S. General Services Administration Season 3 Episode 15


In this episode of GSA Does That!?, we're joined by Zach Whitman, GSA's Chief AI Officer and Data Scientist, to explore the transformative role of artificial intelligence in government operations. Zach shares his journey into government service and highlights key initiatives like AI governance and policy frameworks. He discusses how AI is reshaping public services, workforce dynamics, and data infrastructure, and touches on the ethical considerations of AI, GSA's dedicated efforts to adopt AI responsibly, and the future potential of AI in government. Looking ahead, the conversation turns to how GSA is improving data infrastructure to enable better decision-making and empower the workforce through accessible and understandable data.

Want to know more?

Do you want to know more about how GSA is shaping the world of government AI use? Check out the resources below.

"GSA Does That!?" is the U.S. General Services Administration's first agency-wide podcast, offering listeners an inside look into how GSA and its partners benefit the American people. Hosted by Rob Trubia, the podcast features interviews with GSA leaders, experts, partners, and customers, covering topics such as federal real estate, acquisitions, and technology. The title reflects many people's surprise at the scope of GSA's impact. At the same time, the artwork pays homage to President Harry S. Truman, who established GSA in 1949 to improve government efficiency and save taxpayer money. Whether you're a policy wonk or just curious about government operations, you can join the listener community.

For more information about the show, visit gsa.gov/podcast.



Max Stempora
Welcome back to another episode of GSA Does That!?, the podcast that uncovers the stories behind the federal agency delivering effective and efficient government. I'm executive producer Max Stempora, filling in for our normal host, Rob Trubia. In this episode, we're diving into the world of artificial intelligence. With AI appearing across the web, applications, and even smart devices, these new tools are primed to be a game changer. As we approach the one-year mark of the executive order on safe, secure, and trustworthy AI,

come with me as we discover what GSA has been up to on this front. Joining me today is GSA's Chief AI Officer and Data Scientist, Zach Whitman. Zach is relatively new to GSA, but has a history of digging into data and getting the most out of it. We're going to dive into the benefits, risks, and challenges that AI brings to the workforce. Before we get going, remember that this podcast is available on all major platforms, so please be sure to follow, like, and leave us a review. For more information about this episode and others,

visit us at gsa.gov/podcast. All right, with the housekeeping done, let's get into it. Zach, welcome to the show. I know I've been looking forward to this one, and I can't wait for our listeners to learn a little bit more about you and what you've been working on. Reading your bio, it looks like you've worked across government and the private sector.

You have an interesting educational background from what I've read. You have a doctorate in hazard and disaster management. That is an absolutely interesting degree and it makes me curious. How did you end up at GSA doing what you're doing? Can you tell our listeners just a little bit about your journey and how you got started?

Zach Whitman
Yeah, yeah. And sometimes I have to remind myself how I got here, so I don't think it's unfair for us to walk through that. Yeah. So hazard and disaster management: my background was originally in the physical sciences and how they interact with people. The idea being, if an earthquake hits, you know, what is the socioeconomic impact of that earthquake?

How does it affect communities? And what can we do to prepare for them better? So with that, it was a lot of geospatial statistics, things like forecasting models and event prediction, that took me into the commercial sector. First and foremost, I was brought on to work with Deloitte as part of their advanced analytics practice.

And we largely focused on problem sets that were spatial, where how things happen in space and time really mattered. And I would move around a lot within the federal workforce, going from agency to agency. We were a small practice that would do very focused problems, and then we'd be moved over. So the good part about that was I got a lot of breadth across different types of projects.

But the bad part was they were very short engagements. We never could stay for very long after we did the analysis. And I was always hungry to see things through. We would come in and, you know, try to figure out a problem, and then a few months later we'd have to leave, and we wouldn't get to see how that actually turned out.

And so I was always looking for longer engagements whenever I could. And from there, I ended up bumping into the Census Bureau a fair bit. Their work being spatial, with a strong bent on geography, we would ultimately work together. But let me take one step back as well, because I think it's important: in a lot of the projects, be it public sector or commercial, what we found was that the answers we were able to provide were oftentimes limited by the availability of the data and the quality of that data.

And the primary source for all of these data was governmental agencies. If you wanted to figure out how best to optimize, say, a cell tower network, usually you'd have to rely on state or municipal data sets to figure out what kind of building infrastructure you'd be attaching these things to.

And that led me to see examples where certain investments in communities would not be made because the data weren't available. We simply could not get enough information to make a good prediction. And that really lit a fire under me to say, well, what can we do to improve the availability of data? These public resources have such importance.

And how can we make sure that people like myself, but also other industries and agencies, can use them? And so that led me into working with the Census Bureau. We were working on a project where the Census Bureau was redesigning the ways in which it disseminates its information. And the Census Bureau sits within everything.

It has data on people, households, and the economy. It touches every part of our civic life. And this provided an opportunity that was too good to pass up: how can I make a dent in this pet peeve of mine, which is that we have such great information, it's just not reaching people? And so that's what led me into government service.

I spent seven years there working on their dissemination team, and then from there I transitioned over to GSA with the idea: what can we do to take some of the lessons learned from the Census Bureau, a statistical agency, and bring them on board to GSA? And it's just been a great move for me. I feel so empowered.

This team is incredible to work with. GSA is such a vibrant organization with so many different angles to apply these types of problems to, which is just fodder for any kind of data scientist. And so, yeah, it's a long-winded story, but basically I was doing a lot of research with public data, found myself handicapped by the lack of availability of it, and tried to fix the problem from the inside.

And now I'm stuck, because this mission is too exciting to leave.

Max Stempora
It certainly sounds like you ended up in the right place here at GSA. You know, we've had a lot of different people on the podcast from both inside GSA and outside. But I have to say, I think you have one of the coolest titles of any of the guests we've had. You are the chief AI officer as well as chief data scientist for GSA.

What does that even mean? And what does your day-to-day look like?

Zach Whitman
You know, it's funny, when I was coming on board, the role of chief AI officer didn't exist. It was responsible AI official when I was coming in. That was the function. And it was really about making sure that we are managing our AI adoption responsibly, as the title really kind of professes.

And that's an important thing to consider, because with AI, especially nowadays, the capabilities are such that it's relatively easy to do a lot of things. And these capabilities are powerful and profound. They have a broad base of application. It's not just in the data science world anymore. It's exploded out to every corner of your life, it seems.

And now we're reaching a point where it's much more about the editing, the self-editing, of how we apply these technologies, and less about the capabilities of these technologies themselves. I saw a story the other day of a crew taking these smart glasses and chaining a bunch of AI tools together so that the glasses could recognize people's faces, identify who that person was, do a web crawl to gather more information about them, and feed that back to the wearer of the glasses.

I think there's an editing question there: is that something that we want, rather than just could we do it? And so things like that I think are really critical, especially in government, where we have a responsibility to use these technologies in a trustworthy and open way. The relationship between the AI officer role and the data scientist role, I think, is a natural one, because AI is nothing without our data and our data practices.

AI is not new. It's been a big function of any kind of data science toolchain for years now. And so the idea that we had going into this was: how can we marry the critical infrastructure of our data with the enabling technology of AI, build a sustainable infrastructure for GSA to really benefit from the capabilities, and build a strong foundation from which we can work?

So our thinking was to keep these things as close together as possible. And, you know, I think our day-to-day is largely driven by how we can enable the organization to adopt these technologies safely. What steps do we need to put into place, from an IT perspective, be that the scalable compute required to use this stuff, to the data and the data products needed to drive them into creating more value, and then ultimately to the use by data scientists or other practitioners? How do we build platforms where folks feel empowered to be creative and solve problems, without having to fight the technology or the data sets that feed their work?

Max Stempora
Well, I think the story you were telling there flows naturally into this next question. Now, it's been about a year since the president released the safe, secure, and trustworthy AI executive order. And the story you were telling about AI glasses seems to fall right in line: yes, we can do it, but should we be doing it?

So the question really is, what has GSA been working on this past year? What have we been doing?

Zach Whitman
Yeah. And so the thing that we've been focusing on is setting strong policy groundwork so that our adoption of AI is effective, sustainable, and trustworthy. We know that the capabilities are there. It's a question of how we can convey that our use of it is as trustworthy as the EO mandates, if not exceeding those standards.

And I think one of the things that I'm really happy about is that we have that leadership coming through and saying these are the core principles that must be enforced across our federal complex: to convey our maturity in using this technology, to demonstrate its efficacy, to provide recourse should these systems not behave in the ways in which they should, and to mandate that we describe how we do that.

The key actions that we took following the release of the executive order were to work closely with OMB to identify the specific actions that need to be established for reporting our use, for the risk mitigation factors that we're going to be applying to our use cases, and for how we report those use cases.

And then, once the M-24-10 instruction came out, it was all about establishing the governance bodies and the processes required to do that in an operationalized setting. Especially at scale: with GSA being so broad, especially in the mission centers, it's really critical that we have a governance structure that can handle that variety.

What we did was establish an AI governance board. That's an executive-level board designed to oversee GSA's strategic aim in terms of how we want to adopt AI: What is our risk tolerance and profile for these types of use cases? And then, ultimately, where should we target and prioritize AI adoption?

Below that, we have a supporting team called the AI Safety Team, which is designed to adjudicate use cases as they come in. We wanted to make sure that this was a grassroots effort, where folks who identified workflows that could be enabled by an AI technology had a forum to propose that use case as a viable alternative to our current process.

The AI Safety Team takes in these requests, assesses them for their risk profile, and makes sure that they are representative of the broad equities that GSA reflects. And so in both of these governing bodies, we had a very strategic aim of making sure that we had broad representation.

It was not a technology board, one that would adjudicate whether or not the tool could do the thing, but really, should it do the thing? And that's why we brought in a whole variety of folks. We have CIO representation, but we also have FAS representation, business line representation, PBS.

CFO was involved. We have ethics involved. OGC is involved. We wanted to make sure that we had all eyes thinking about this from as diverse a perspective as possible, to make sure that we're not missing stuff. Because, again, it goes back to the question of whether this is the right adoption of this technology.

Because we know it can; should it?

Max Stempora
Well, it sounds like GSA is putting in a lot of work trying to sort out the boundaries and set a framework for how we're going to do this. And I think that's a great step, right? It's enabling us to use these tools responsibly. But GSA was founded to create a more efficient government and to serve it as a whole.

How do you see our work impacting all of government, right? What are we doing that's going to enable the rest of government to use these tools?

Zach Whitman
Yeah. This is something that I think we've been doing really well, and this is a partnership. One of the things that I want to stress is that the function of the chief AI officer is really not representative of the amount of capability or expertise we have in AI. And that's been demonstrated by some of the outputs that we've been releasing over the course of the year.

We released our FAS ITC procurement guidance for generative AI, which was an incredibly well received document that helps procurement professionals understand how to approach procuring generative AI technology, since it does have nuance that needs to be addressed. In fact, that was also bolstered and supported by the release of M-24-18, which only further underscores the good work that FAS ITC did in developing that procurement guidance.

We've also been leading the charge in the AI hiring surge. Ann Lewis and TTS have been absolutely instrumental in driving more AI talent into the federal workforce, be that through the shared hiring authorities that have been put forward, the PIF program, or the Digital Corps efforts. We've seen a huge influx in the number of applicants with an AI specialization who want to work for government.

And this is really an important thing to shout out. Folks want to work on really mission-critical, interesting problems. We, the federal government, have those problems, and we need to make sure that we provide a good work environment for folks to dig into those problems as much as possible. I think we're competitive in that space.

Our mission is so interesting, and the problem sets that can be worked through these technologies are incredible. And I would just shout out to the audience: make sure you take a look at some of those hiring authorities. If you're thinking about working for government, check them out.

And if you're hiring for government, check them out now, because, again, the more of this talent we can get in here to collaboratively work these really sticky and interesting problems, the better we're all going to be served. So huge effort there. And also, once you've joined the federal government, you have support. One of the things we have is the AI Community of Excellence, which has communities of practice where you can swap war stories, talk about what you're doing, and learn from others.

There is a very clear AI tribe building up from this organic community of AI specialists, where you can collaborate with other folks, not just within your organization but across the whole federal complex, which is, again, vast and has incredible problem sets to be explored.

Max Stempora
I love that you mentioned the workforce and hiring. It seems like every episode we have somebody on who talks about how people love doing impactful work, and government is where that work is, right? That's one of the things about what we do. And I think for me, one of the coolest things about GSA, talking about hiring and bringing in help, is that we're always holding events where we're bringing in people and looking for outside expertise.

I seem to remember just a few months ago, we did something called an AI hackathon. Can you maybe tell us a little bit about that? Why did we do it? What did we get out of it and break it down for our audience a little bit?

Zach Whitman
We wanted to explore what the future of our digital services would look like in a world where folks interface with AI tools a little bit more directly than they do today. Think about a premise: products like ChatGPT, Google Gemini, or Anthropic's Claude become a first stop where a lot of folks say,

I want to ask a question, I want to get some more information, I might want to take an action. If folks start to go to those tools a little bit more directly, they may not have a need to leave those tools. It's not like a traditional search engine, where you ask a question, you get a bunch of links, and a link takes you to a place where the information can then be understood. These LLMs deliver the information in a consumable way, then provide you follow-ups and the ability to take action.

And so we were thinking: what would the world look like if folks started to use these tools more directly? And importantly, how would the federal government need to respond to this environmental shift? We've had a huge environmental shift like this before. When Google broke onto the scene and became the norm, people designed their websites differently in response to the algorithm.

The moment we're in right now, to me, is similar. We have this potential generational shift where people's workflows fundamentally change because of the tooling available, and the content that's generated changes in response to that tooling. And so we threw down this hackathon because we wanted to explore this concept: what would others coming from outside the government think?

Think about how a federal website needs to look, act, and behave in a world where people use ChatGPT as their first stop. Does the website need to look different? Do we need to write it in a different way? Do we need to create new APIs that can interact directly with these LLMs? How do those APIs need to behave?

There's a whole host of things we could do, and I think the idea of a hackathon gave us the flexibility and the open-mindedness to be on the receiving end of those good ideas.

Max Stempora
Before we keep going, you used an acronym during that answer that I want to make sure our audience understands. What is an LLM? What does that mean for people who don't follow tech or know what AI is?

Zach Whitman
Yeah, sorry, I should have caught myself. A large language model is a computational model that generates output like text. It can translate content; it can perform tasks like natural language processing and sentiment analysis. And really, these are powerful models that allow folks to have a conversation with a very large data set that lives in the background somewhere.

Think about it: if you were able to train a model to emulate the speech and text it found on the internet, that is effectively what these LLMs have become: very large, very comprehensive means of replicating human speech patterns and performing a variety of different tasks.

Max Stempora
Okay, I think that helps. And I want to go back now to when you were talking about the AI hackathon and how it tried to reimagine how we would do this. I think throughout history, you can pinpoint certain advancements that were what you would call disruptors, and that seems like the moment we're at right now.

So with telephones, they replaced telegrams; online shopping destroyed traditional retail; or even think about ridesharing apps and what they did to the taxi industry. So I guess, as you think about how things are changing and what AI is doing, the primary concern from a people perspective is: what does this mean for me? What do these tools mean for the workforce? Or is it just a matter of learning a new tool to do your job better and more efficiently?

Zach Whitman
Yeah, very much the latter. I think, at a superficial level, sometimes the messaging gets a little muddled, and there might be maybe too much fervor in the idea of how capable a lot of these tools are, in the sense that they can do entire processes from end to end and it's basically hands off the wheel.

We're not in that space. And it's important for folks to, one, try out these tools, where you can get some practical experience with them, because these are functional tools that enable you to be more efficient and productive. And really, that's the framing that we're taking here. If you look at the literature, that's where the benefit is: it helps you get going, it helps you move faster.

It helps you be more efficient. It helps you focus on the things that you should be focusing on or that need your attention. And it can remove the mundane, the banal, the routine stuff from your workflow. And so the way in which we're approaching it is, one, this is a tool like any other.

We want to make sure that people have access to these tools, can use them safely, and understand the risks of using them. Like any tool, there can be good and bad to it, so it's important that the training is there for folks to understand how to use them effectively.

We also want to make sure that people, from a career development perspective, have access to these tools, because there is a norm happening amongst industry where folks have these tools and this is how they operate. This is becoming a mode for doing business. And so, to maintain our presence, we have an obligation to our team to make sure that they have access to this industry norm.

But it's also important that we establish these processes, which I think is where government will lead in a lot of respects, in terms of the very safe and trustworthy adoption of these tools. This is where establishing guardrails, best usage patterns, behaviors, and applications for these tools really comes in. And that's where we have a really exciting dynamic with the team, where we're having conversations about the application of this in a given workflow, how it works, and where the human in the loop might reside.

And what are the specific questions that we want folks to ask themselves as they start to think about adopting these tools? And so there's a huge variety of different things that need to happen, or that we are doing, from training to upskilling to technology acquisition, but also in process development and best-practice establishment. I'd also call out that the way in which we're adopting it is from a science perspective.

If we look at things like login.gov, or research like the equity study, which is trying to understand how certain technologies behave when tested against images from different socioeconomic and demographic groups: does face-matching software behave differently depending on what types of images it is looking at?

That is an important question, not just for GSA but for the general federal workforce, as well as for the industry writ large. And so I think leading through science and through our scientific inquiry, to better understand how certain things behave and what you need to do to prevent bias and inequitable outcomes from making their way into solutions that have real impact on people's lives.

That is an incredibly important function for GSA to move forward with. And so I'm super proud of things like the equity study, which are operating in the open, publishing their work, and providing real benefit in a very timely manner to an important conversation, not only for our own benefit but for others' as well.

Max Stempora
Well, it certainly sounds like things are moving in the right direction. Looking ahead, what comes next? Like, let's say over the next year or two years, what happens? Where does this go?

Zach Whitman
Yeah, we are making a focused investment into our data infrastructure at GSA, so that we can empower the collective workforce to ask creative questions of our data and use it in a sustainable way. We are productizing our data sets, our data assets, so that they can be understood not only by the analysts on the ground but also by systems like LLMs, which require context and enough content to be understandable. And there's an interesting thing about this new technology: we've found that

for it to be understandable by these new systems, you oftentimes need to take a step back and consider how you can make your own internal data more human-readable, because these systems are designed to act on and understand content like people do. And oftentimes, when you look at your data, it's really jargony. Maybe it's just a bunch of codes.

Maybe you have to go into a PDF that's like 400 pages long and find the code book that maps to it. And it's been really, really great for me, because getting data into shape for people to use is really my thing. That's what I really want to get done. And AI kind of helps me in that case, because the questions that an LLM would have about the data sets are the exact same ones that people would ask: what does this code mean?

What is this column trying to tell me? All of that contextual information, the data labeling, is really helping us data practitioners, selfishly. It also helps the systems, and it helps us build more sustainable pathways. It's a very weird thing that is happening, that we humanize our data to make it more legible and understandable for these machines, and I'm grateful for it.

Anything we can do to make our data more understandable, I'm down for, and I'll use LLMs if I have to; that's something I'll use too. So what I hope to see in the future is a humanization of our data in a way that makes it more approachable and easier to use, where we consider how we can make people more empowered to make smarter, more informed, and faster decisions that are more accurate and more precise.

Max Stempora
Oh, that's a perfect way to end. I can't say thank you enough for joining us today. I think our listeners are going to love this conversation. There was a lot there, and hopefully it was easy to understand. I feel like I just took a class on the world of AI, its risks, its benefits, and how we're going to keep using it into the future.

So thanks for joining us. Well, that wraps up another episode of GSA Does That!? A huge thank you to Zach for spending some time with us. With the world of AI growing every day, it's great to hear that GSA is working to ensure these tools are being used responsibly. Stay tuned for our next episode as we once again explore the world of government contracting, and discuss the latest on transactional data reporting.


If you've never heard of TDR, the goal is to help government make data driven purchasing decisions. If you like what you hear or have something you want us to discuss on a future episode, don't hesitate to reach out. Our email address is gsadoesthat@gsa.gov, and remember to follow us on all your favorite podcasting platforms.


I’m Max Stempora filling in for Robert Trubia, GSA Does That!? is a production of the US General Services Administration, Office of Strategic Communications.
