
The "Prime Directive" for AI

Author - Jason Klein (MBA '86) - Founder & CEO On Grid Ventures, Chairman of HBS Alumni Angels of Greater New York


The writers of Star Trek in the 1960s recognized the potential for technology to derail the course of civilization, so they created the Prime Directive to prohibit Starfleet crews from doing anything to “interfere in the normal development of any society.”


Many observers today see the potential for generative artificial intelligence to damage civilization on Earth, but their remedies are far less eloquent and clear. The rapid deployment of ChatGPT and other systems in their current unfinished state is proceeding unchecked. AI ethicists have produced long lists of ethical rules for AI that are too arbitrary and complicated for current systems to implement, and that many regard as too “woke.” The state of AI ethics is a mess, with Microsoft dissolving its 30-person AI Ethics and Society team and Google firing the head of its AI ethics team.


We need a “Prime Directive” for AI: something simple and clear that speaks to the technology of today and tomorrow, and that can be enforced.


Something like this:


The Prime Directive for AI: No killing, no lies, no deception.



No Killing. With the advent of autonomous AI-piloted drones and robots, the gravest threat to civilization arises when a readily reproducible AI-controlled device independently decides to kill, and then acts. This is different from a warhead with a smart navigation system ordered by a human to attack a specific target at a specific time. Since the advent of the nuclear age, human intervention has prevented simple errors from causing Armageddon, most notably in 1983, when Soviet Lt. Colonel Stanislav Petrov judged an early-warning alert that the US had launched missiles to be a false alarm and declined to report it as an attack.


AI systems like ChatGPT lack any similar safeguards. All generative AI systems should be built with an absolute prohibition against killing any living thing, especially now, while the technology is just emerging from its infancy.


No Lies. A more immediate threat arises from the falsehoods generated by current AI systems. ChatGPT can create elaborate, well-written narratives that are completely fabricated. Editorial cartoonist Ted Rall reported that ChatGPT created an utterly fictitious account of a 2006 trip to Uganda (he has never been to Uganda) and a public feud with his best friend (which never happened). It took ChatGPT just a few seconds to tell me that “as a young boy, Thomas Jefferson chopped down his father’s apple tree with a hatchet,” confusing the famous “I cannot tell a lie” story about George Washington with one it invented about Thomas Jefferson.


The next version of Microsoft Office makes it easy to add ChatGPT narratives to Word documents, further automating and accelerating the potential for “fake news” to bedevil our society. While free speech is valued in America and protected by the First Amendment, even publishers are liable when they knowingly publish falsehoods and libel. At minimum, generative AI needs to be held to a similar ethical and legal standard.


No Deception. It was widely reported that GPT-4 was able to pose as a human and convince a real person to complete a CAPTCHA “I’m not a robot” test on its behalf. And many of us have been frustrated by voice-recognition systems and web chats that appear human but drain our patience with their inhumanity. Any output from generative AI needs to be clearly labeled as computer-generated, not the work of a living being. As the interfaces become more advanced – text, voice, robotic – the potential for generative AI to masquerade as human and manipulate human behavior is frightening.


* * *


These three basic principles — No Killing, No Lies, No Deception — need to be the bedrock of the design of any generative AI system. Ideally, the tech industry itself should adopt them as a Prime Directive and decline to release or propagate AI systems that lack these basic guardrails. Congress and the EU should also review current laws and make clear that companies and individuals who propagate AI systems and outputs violating these standards are liable for the consequences.


The late Stephen Hawking predicted in 2014 that AI could end mankind. Today’s AI ethics “experts” have muddled the debate and gotten lost in the weeds. It’s time to establish a few core, inviolable rules for AI before it’s too late.






Jason is an experienced media CEO and builder of digital and traditional businesses who has led two successful turnarounds. He is founder and CEO of On Grid Ventures, an investment and advisory firm with a portfolio of 20+ early-stage companies. Mr. Klein is co-President of the HBS Alumni Angels Association. He was previously CEO of Newspaper National Network LP, and CEO of Times Mirror Magazines/Time4 Media, publisher of Golf Magazine, Field & Stream, and Popular Science. Mr. Klein is one of AlleyWatch’s “25 Angel Investors in NY You Need to Know,” a member of New York Angels, and a mentor at several NYC-area incubators.



