Setting Rules For AI Use With Teams And Vendors In Your Small Business


Lisa Zeiderman, Esq., CDFA, CFL, a Managing Partner at Miller Zeiderman, LLP, is a matrimonial and family law attorney based in New York.

For small business owners, defining how teams and vendors use AI is an emerging challenge with an ever-expanding scope. In the most general sense, complications in setting universal policies arise from varying regulations and standards across industries, as well as differing definitions of what constitutes AI use.

For example, law firms may require stricter AI protocols to comply with privacy regulations. Even in less regulated industries, concerns such as plagiarism can arise. From there, the ethical and operational challenges only become more complex.

Given numerous well-documented issues like hallucinations, outdated information and ongoing legal and ethical concerns, it’s no surprise that business owners are struggling with how to best regulate AI use.

As a lawyer and law firm owner, I'm navigating these issues for myself and for the business owners I counsel. Here's what to be concerned about and what you can do about it.

The Struggle To Define AI Use

AI is pervasive, integrated into internet search engines and smartphones alike. It's likewise present in grammar-checking software, blurring the origins of a piece of writing as models generate suggestions a writer may choose to adopt in place of their own words. Then there's the question of ideation, outlining and editing software.

Depending on who you speak to, what constitutes AI use will differ. Who’s correct? It remains to be seen whether there will ever be a definition everyone can agree on. But one thing does remain certain: As a business owner, you will be the one held accountable.

To keep your business safe from technology that's not fully vetted, regulated or even understood, you must become your own regulator, creating a set of rules that you are comfortable with, that your employees and vendors can follow and that you can enforce with a modicum of accuracy (more about enforcement later). However, before establishing such rules, it's wise to review the specific requirements regarding ethics, privacy and disclosure that apply to your industry.

Industry Rules Around Ethics, Privacy And Disclosure

Standards for ethics, privacy and disclosure vary by industry, as do the consequences for violating them. In technology, for example, patents exist to protect inventions still in development. In law, ethical rules, privacy protections and what an attorney must disclose are generally well established.

Even with limited use, AI models can pose risks by providing inaccurate or outdated information and failing to cite sources accurately. For business owners who have spent years building systems and processes to improve efficiency and growth, delegating tasks so they can focus on larger-scale initiatives can be unnerving, given the risks that delegation carries in an AI-driven world.

The unfortunate outcome is the need for small business owners to police work created by others, which slows workflows and runs the risk of lowering morale among once-trusted employees and vendors. Worse still, the policing methods (i.e., AI detection software) are still in the embryonic stages of development and can be flawed.

The Problems With Policing AI

Policing AI usage is far from an exact science. Results can vary depending on the type of AI detection software used and the word count of the text submitted for analysis. AI detectors can also be fooled. As a result, detection software can yield false positives and false negatives, potentially leading to unjustified accusations and real harm to an employee or vendor. And the plot thickens from there.

When the origin of an idea is the focus, analysis becomes that much more problematic. Did an idea originate in the mind, or was it an AI response? Of course, it would be remiss of me to ignore the reality that the quality of output relates directly to what a human asks an AI chatbot, raising the question: Which came first, the chicken or the egg?

If you're counting on better ways of detecting AI, the outlook for policing may be grim. As models continue to learn from human usage and become more adept at simulating human thought, it's conceivable that detection will become more challenging, not less.

So, what’s a small business owner to do?

How To Set Effective Guidelines Around AI

In business, you don't typically have all of the answers before starting a venture. If founders waited for every box to be checked, there probably wouldn't be any small businesses to speak of. In other words, this should not be your cue to wait in the wings. Instead, work with the knowledge you have and fill in what you don't by monitoring developments in AI applications that can enhance your business's capabilities and efficiency.

Beyond this, take comfort in the fact that you know your industry and the rules and regulations in place. If you're no longer as familiar with those rules and regulations, now's the time to reacquaint yourself with them, along with any new ones related to AI. Finally, assess your risk tolerance and build your AI guidelines around it. If your risk tolerance is low, create more rules for AI usage so you know where and how it's being applied.

Guidelines, especially when emerging technology is involved, will invariably require tweaking. With this in mind, inform employees and vendors about your policies and any updates you make to them, so they can continue to produce work products that you can be proud of and trust. When in doubt, speak with an attorney experienced in small business ownership to ensure your rights and responsibilities are protected and remain so.
