Artificial Intelligence and Nuclear Energy: What Could Go Wrong?

So if the plan is to not just subsidize nuclear power in pursuit of more A.I. but also to encourage energy companies to use A.I. to slash red tape in pursuit of more nuclear power, what might that look like exactly? Terra Praxis co-CEO Eric Ingersoll did—sort of—explain this to the Journal: “What we’re doing here is training a [large language model] on very specific highly structured documents to produce another highly structured document almost identical to previous documents.”

If you’ve spent much time tinkering with ChatGPT, you’ve probably figured out that even when it produces a nice-looking document, it isn’t doing so through “reasoning or factual evidence,” as Khlaaf explained. Instead, LLMs use probabilities to fill each gap with whatever seems statistically likely to go there, without ever considering whether anything should go there at all. This tendency to make things up is what an A.I. “hallucination” is.
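To see why that matters, here is a toy sketch of the idea, with invented vocabulary and probabilities (real models are vastly larger, but the core move is the same): the system always samples *some* continuation from a probability table, and it has no way to say "nothing belongs here."

```python
import random

# Toy illustration, not a real language model: a lookup table of
# next-word probabilities learned from imaginary training text.
NEXT_WORD_PROBS = {
    ("reactor", "safety"): {"report": 0.6, "margin": 0.3, "banana": 0.1},
    ("safety", "report"): {"approved": 0.5, "filed": 0.5},
}

def sample_next(context, rng=None):
    """Sample the next word for a two-word context.

    The model must emit whichever candidate the dice land on,
    even for contexts it has never seen -- it cannot answer
    "no word should go here," which is the seed of a hallucination.
    """
    rng = rng or random.Random(0)
    probs = NEXT_WORD_PROBS.get(context, {"<made-up>": 1.0})
    words, weights = zip(*probs.items())
    return rng.choices(words, weights=weights, k=1)[0]

print(sample_next(("reactor", "safety")))  # one of: report, margin, banana
print(sample_next(("never", "seen")))      # prints "<made-up>"
```

Even for a context the table has never seen, the sketch confidently returns an answer, which is exactly the failure mode Khlaaf describes.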

“This is precisely why A.I. algorithms are notoriously flawed, with high error rates observed across applications that require precision, accuracy, and safety-criticality,” Khlaaf told me. You’ve probably seen what these systems can do and, more to the point, what their limits are. There aren’t secret A.I. systems out there that can be trusted with nuclear safety, “even if an A.I. were to only be specifically trained on nuclear documentation,” Khlaaf explained. “Producing highly structured documents for safety-critical systems is not in fact a box-ticking exercise. It is actually a safety process within itself.”