How to talk about AI risk without sounding like a nutjob?

One of the biggest problems I've personally struggled with is how to talk about AI safety without coming across as a madman. To the uninitiated, this community looks like a cult; I've heard it called "the rapture of the nerds." Yet AI safety is also reasonable. It is grounded in rational argument, and even the extreme claim that AI safety is literally the most important thing can be defended on reasonable grounds.

People in this field operate like cloistered monks. Although their work is all in public view, it is poorly communicated, and the sheer volume of highly specific AI alignment jargon gives the field an esoteric feel. This makes it even less accessible to the average person.

Accessibility matters if one wants to reach national policymakers. People in this field actively avoid reaching out to policymakers because they don't want to set off some kind of arms race. I don't fully understand this stance. There is no real commercial incentive for AI safety research, and I don't trust the private sector to genuinely care about it; regulation and safety are typically government concerns. But as someone who has been on the inside, I can say the government is nowhere to be found in AI safety. I think part of the reason is that it's hard to talk about the subject without it sounding as dry and passive as government-speak typically does.