Robots in Procurement: The AI Decision Dilemma

Friday, May 24, 2024

We’ve all seen it by now.

The National Aeronautics and Space Administration (NASA) Solutions for Enterprise-Wide Procurement (SEWP) team officially adopted a stance on using generative Artificial Intelligence (AI) in federal proposals, specifically in its recently released SEWP VI Request for Proposal (RFP).

 

NASA SEWP VI RFP (Pg. 98)

Believe it or not, this is not the first time the federal government has made this request.

The responsible and ethical use of consultants and proposal development tools has long been discussed across our industry, as demonstrated by a 2015 U.S. Government Accountability Office (GAO) case that upheld a Department of Veterans Affairs (VA) restriction on using consultants to develop proposed technical approaches.

As generative AI tools, software, and providers for federal proposal development continue to develop rapidly, with new logos, press releases on funding rounds, and promises of a game-changing solution popping up daily, we find this topic back under the federal microscope.

AI and Federal Procurement. AI has catalyzed daily discussions, posts, and panels around the federal contracting ecosystem on its benefits, challenges, impacts, and how best to implement it.

Here at The Pulse, we are not your typical millennials: we are slow to adopt (and sometimes resistant to) new technology and, truthfully, prefer to do things manually, especially when developing a federal proposal response. That’s why we wanted to take on this subject.

In full transparency, we have been outspoken AI naysayers for a long time. Not because we were scared it would take away our jobs, but because we had a hard time seeing how commercial innovators and federal laggards could form a meaningful alliance.

Remember this 2016 GAO Report about how some federal government IT systems are over 50 years old?

But as the world keeps pushing us to evolve and adapt, we must admit that technology can do incredible things to shortcut cumbersome processes—hello, Instacart! However, as Proposal Compliance Evangelists, we see the red flags that generative AI tools present, raising the question: “How should we balance the use of AI while developing a federal proposal response?”

Here are three things our team recommends you consider before letting robots near your next federal proposal.

What About "Garbage In, Garbage Out"?

The first question any organization should ask is: what source(s) is the AI model ingesting? 

  • Are they reputable? 
  • How are they collected?
  • Are they curated or filtered? 
  • Are they up-to-date?
  • Are there known issues or limitations?
  • Do experts validate them?

All organizations must understand the content and training of their AI model. Generative AI models learn patterns from the data they ingest to make predictions, meaning that the quality of the AI system’s output directly correlates with the quality of the input data it receives.

How does this impact you and your federal proposal? Here are a few consequences of junky and unreliable data sources:

  • Prediction and Decision Flaws: Generative AI algorithms analyze large datasets to learn patterns and relationships. If the training data is incomplete, imbalanced, or inaccurate, the AI model learns from those deficiencies, leading to flawed or biased predictions and decisions. 
  • Representation Bias: An AI model can only generalize well to new, unseen data if its training data represents the real-world scenarios the tool will encounter. When those scenarios are missing from training, the model produces biased or skewed predictions that do not accurately reflect the true diversity of situations.
  • Data Drift: Over time, the statistical properties of a dataset used to train an AI model may change, degrading the model’s performance. If the model’s training is not regularly monitored, adapted, and improved upon, it becomes unreliable in real-world deployment scenarios.
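The data drift concern above can be monitored in practice. As a minimal sketch (not tied to any specific proposal tool, and using hypothetical numbers), one approach is to periodically compare a summary statistic of incoming data against the training baseline and flag significant shifts for human review:

```python
import statistics

def drift_score(baseline, current):
    """Measure how far the current data's mean has shifted from the
    training baseline, expressed in baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mu) / sigma

def has_drifted(baseline, current, threshold=2.0):
    """Flag the dataset for review when the shift exceeds the threshold."""
    return drift_score(baseline, current) > threshold

# Hypothetical example: document word counts seen at training time
# versus the documents the tool is ingesting today.
train_lengths = [480, 510, 495, 505, 520, 490, 500, 515]
new_lengths = [900, 880, 910, 870, 905, 890, 920, 895]

print(has_drifted(train_lengths, new_lengths))  # True — flag for review
```

Production systems typically use richer distributional tests, but even a simple check like this catches the failure mode described above: a model quietly operating on data that no longer resembles what it was trained on.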

Could We Cause an Identity Crisis?

If you are considering a generative AI tool for federal proposal writing, so are your competitors.

As these tools become more common, it’s easy to imagine a federal source selection committee tasked with evaluating a group of proposals that, other than names and graphics, read as the same narrative three or four times over. At a minimum, this raises issues under the Procurement Integrity Act (PIA) and the False Claims Act (FCA) — not only for the federal government but also for federal vendors, whose ability to stand out from the competition is undermined.

To keep your AI-drafted proposal grounded in your company’s brand, voice, and messaging, every organization needs to ensure its generative AI proposal tool does the following two things.

Delivers Domain Awareness. Your AI tool must apply document understanding to develop unique content demonstrating technical expertise. Translation? Your AI tool securely leverages your organization’s proprietary enterprise data, such as contextually relevant original content for proposals, white papers, Contract Data Requirements List (CDRL) deliverables, etc.

This is the opposite of publicly available tools such as ChatGPT, which produce consistent, compelling, and persuasive text that closely mimics human writing—but only when the information already exists on the Internet. When directed to write about a new subject, tools like ChatGPT sometimes get creative, generating authoritative-seeming text and even populating it with official-sounding sources, including fictional names and titles. This is dangerous for any organization, especially for federal stakeholders and vendors, for obvious reasons.

Complements Your Human Proposal Professionals. Our industry has long debated whether technology will replace the function of the proposal manager, writer, etc. Still, these tools need human professionals just as much as human professionals need the tools! It’s a symbiotic relationship.

Think about it—practically, even the best-trained language model likely won’t capture every nuance of your company’s solution or the aspects of your past and current performance that appeal to the customer, nor can it guarantee that your response is accurate and aligned with all legal compliance requirements. Only your experienced professionals can do that.

So please do us all a favor and don’t blindly trust, propose, or submit what the AI robot has developed for you. Keep a human in charge of the development and review process.

Will Our Information Get Lost in the Ether?

When it comes to any AI tool, you need to understand where and how it transmits your inputs for processing. Legal questions raised in other publications include:

  • Is there any risk that, for example, the tool would use servers in locations where you couldn’t transmit the export-controlled information you would need to write your proposals for specific customers?
  • Would license terms permit the AI tool’s owner to use your inputs to the AI tool or the proposal draft generated by the tool to create proposal drafts for other contractors?
  • If yes, does the mere fact of this indirect access to proposals risk undermining assertions of trade secrets or exemptions from disclosure under the Freedom of Information Act (FOIA)?

These scary questions have massive implications for your organization and could create new rules and regulations for federal vendors in areas like Organizational Conflicts of Interest (OCI).

To protect your company, we recommend selecting a tool that does not expose company data to the internet. This means no shared data or resources across customers, and a singular, exclusive, secured platform for each company entity.

So, where do we go from here? Our industry finds itself in a significant gray area. Even though many federal agencies use AI in their acquisition processes, from solicitation development to proposal evaluation, many have publicly expressed opposing opinions regarding federal contractors using AI tools when developing proposal responses.

A recent example of this comes from NASA SEWP’s Program Director, Ms. Joanne Woytek, who was transcribed saying the following during the program’s October 2023 Industry Day session:

[Transcript excerpt: NASA SEWP Industry Day, October 2023]

Last year, the writing was on the wall for all potential NASA SEWP bidders, but what about other federal agencies? Will there be one sweeping regulation for all federal agencies regarding AI proposal tools? How will this be monitored?

As with the rest of the industry, our questions are endless. But as technology evolves, this debate will increasingly shape any federal vendor’s approach to the most critical document in our industry: a federal contract.

An AI federal proposal tool can be a valuable resource, but it’s essential to understand the risks and rewards to maximize its benefits. Our best advice: use your AI tool to outline or draft your pink team response. Then, let the humans get to work.
