New Delhi: The US Space Force has temporarily halted the use of online generative AI tools, like ChatGPT, for its personnel, citing data security concerns.
A memo dated September 29, addressed to Space Force personnel known as “Guardians,” imposed a ban on using AI tools, including large language models, on government computers.
The restriction will persist until these tools obtain official approval from the Chief Technology and Innovation Office within the Space Force.
The memo attributed the temporary prohibition to “data aggregation risks.”
The use of generative AI, driven by large language models that learn from vast datasets, has surged in the past year. These models underpin products like OpenAI’s ChatGPT, which can rapidly generate content such as text, images, or video from a simple prompt.
Lisa Costa, the Space Force’s chief technology and innovation officer, mentioned in the memo that the technology “will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed.”
An Air Force representative verified the temporary restriction, first disclosed by Bloomberg.
“A strategic pause on the use of Generative AI and Large Language Models within the US Space Force has been implemented as we determine the best path forward to integrate these capabilities into Guardians’ roles and the USSF mission,” stated Air Force spokesperson Tanya Downsworth.
“This is a temporary measure to protect the data of our service and Guardians,” she added.
Costa said in the memo that her office had established a generative AI task force in collaboration with other Pentagon offices to explore responsible and strategic applications of the technology.
She added that further guidance on the Space Force’s use of generative AI would be issued in the coming month.
(With inputs from agencies)