Press "Enter" to skip to content

US Space Force Enacts Temporary Prohibition on Generative AI Use


The US Space Force has temporarily restricted the use of generative AI tools, citing cybersecurity concerns. The decision affects at least 500 users of “Ask Sage” and has drawn warnings of a technology lag behind China.

Key Takeaways

  • The US Space Force temporarily suspends the use of generative AI tools by its personnel.
  • Deputy Chief Lisa Costa acknowledges AI’s potential but raises cybersecurity concerns.
  • At least 500 users of “Ask Sage,” an AI platform, are impacted by the decision.
  • Nick Chaillan, former chief software architect for the US Air Force, criticizes the move, warning it will put the US years behind China.

In a move aimed at safeguarding government data, the United States Space Force has temporarily barred its personnel, known as the Guardian Workforce, from using generative artificial intelligence (AI) tools in their duties.

According to a Bloomberg report dated October 12, the directive states that personnel are “not authorized” to use web-based generative AI tools to create text, images, and other media while on duty.

The Space Force on AI

Lisa Costa, the Space Force’s Deputy Chief of Space Operations for Technology and Innovation, acknowledged the technology’s transformative potential, stating, “Generative AI will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed.” Even so, prevailing wariness over cybersecurity and data-handling standards has prompted the pause. Costa emphasized the need to adopt AI and large language model (LLM) technologies in a more “responsible” manner.

The restriction has already affected “Ask Sage”, a generative AI platform, with at least 500 of its users impacted, according to Nick Chaillan, former Chief Software Architect for the US Air Force and Space Force.

Chaillan’s Point of View

Criticizing the Space Force’s decision, Chaillan wrote, “This is going to put us years behind China,” calling it a “shortsighted decision” in a September email to Costa and other senior defense officials. He also noted that the Central Intelligence Agency and its divisions have already developed generative AI tools that meet data security standards.

The Space Force’s move reflects a broader, global apprehension that LLMs could leak confidential information into the public domain. Notably, in March, Italy temporarily banned the AI chatbot ChatGPT over suspected breaches of data privacy law, only to lift the ban roughly a month later. Major corporations, including Apple, Amazon, and Samsung, have likewise restricted or banned their employees’ use of ChatGPT-like AI tools.

Concluding Thoughts

The US Space Force’s cautious stance on generative AI, while prudent from a cybersecurity standpoint, raises a timely question about how to balance technological advancement with security in a digitized age.

Such apprehensions aren’t unfounded, especially in sectors handling sensitive information, given how nascent ethical and secure AI development still is. However, sidelining generative AI could stifle innovation and operational efficiency, handing adversaries an unintended advantage in the global digital arena.

This move prompts a pivotal question for all organizations in our increasingly digital world: how do we walk the tightrope between leveraging innovative technology and ensuring robust data security? Striking that balance will be essential to maintaining technological competitiveness on the global stage, particularly in security-critical sectors like the armed forces.