Entreat Advisory

Your SME Is Using AI Every Day — But Who Owns the Output?

Most SMEs do not make a conscious decision to adopt artificial intelligence. They simply start using it.

A marketing manager relies on ChatGPT to draft campaign copy. A designer uses AI tools to generate visuals or refine concepts. A sales team polishes proposals with automated assistants. A founder asks an AI platform to summarise strategy notes, draft internal documents, or help think through a commercial problem.

It feels efficient. Invisible. Harmless. And yet, for a growing number of South African SMEs, this quiet, everyday use of AI is becoming one of the most significant unrecognised sources of intellectual property and governance risk in their businesses.

Not because AI itself is inherently dangerous, but because ownership, control, and accountability are often assumed rather than examined.

AI tools have crossed a threshold in SMEs. They are no longer experimental or peripheral. They are operational. They are embedded in day-to-day workflows, decision-making, and value creation. Content is produced faster. Internal systems are refined more quickly. Ideas are developed, tested, and documented at a pace that would have been unthinkable a few years ago.

What has not kept up is the way businesses think about ownership and responsibility.

Traditional assumptions about authorship and creation are becoming blurred. Work that once had a clear human origin now emerges through a combination of prompts, platforms, refinement, and reuse. At the same time, very few SMEs have paused to consider how AI-generated or AI-assisted work fits into their existing intellectual property framework or governance structures.

In South Africa, where IP protection depends heavily on clear ownership, assignment, and documentation, this gap matters. If a business cannot explain, with confidence, who owns what it is creating, enforcing those rights later becomes difficult, and in some cases impossible.

The uncomfortable truth is that using AI does not automatically vest ownership in the company. Depending on the tool being used, its terms of service, how employees or contractors interact with it, and how outputs are integrated into the business, SMEs may be creating assets that are not clearly owned by the company at all. In some cases, ownership may sit partially with individuals. In others, outputs may be exposed to third-party rights or be difficult to protect in any meaningful way.

South African intellectual property law still relies on concepts such as authorship, originality, and assignment. AI does not fit neatly into those categories. Institutions such as the Companies and Intellectual Property Commission record rights; they do not resolve ambiguity. If ownership is unclear inside the business, it will remain unclear outside it.

In practice, exposure tends to cluster in predictable areas. Marketing and brand assets are often the most visible. AI-generated copy, visuals, slogans, and campaign materials are published under company brands without anyone stopping to consider originality, reuse restrictions, or internal ownership. If those assets later become valuable, the lack of clarity tends to surface at exactly the wrong moment.

Internal tools and processes are another blind spot. Founders and teams routinely use AI to refine workflows, generate internal documentation, and create templates or systems that materially improve efficiency. These are often core competitive assets, yet they are rarely treated as intellectual property in their own right. Software and technical outputs present similar risks. Code that is generated or assisted by AI may be integrated into products, licensed to clients, or relied on operationally, without any thought given to how that code would be viewed during an audit, investment process, or exit.

Data and confidentiality issues run quietly alongside all of this. Employees, usually without malice, upload internal information into AI platforms simply because it is convenient. Client information, pricing structures, strategic plans, and proprietary processes are fed into systems whose downstream use is not fully understood. This is not a technology failure. It is a governance one.

Most SMEs respond to AI-related concerns by focusing on the tool itself. Which platform should we use? Is it secure? Is it reliable? Those questions matter, but they are secondary. The real questions are governance questions. Who is authorised to use AI tools for business purposes? For what types of work? Under what conditions? And how are the outputs treated from a legal and commercial perspective?

The principles articulated in King IV emphasise accountability, oversight, and risk management, even for smaller entities. AI simply exposes where those principles were never properly operationalised. Good SME governance does not ban tools or stifle innovation. It sets boundaries. It allocates responsibility. It ensures that value creation does not run ahead of protection.

Addressing this does not require complexity or committees. It requires clarity. Businesses need to be deliberate about what counts as company intellectual property, even when AI is involved. They need to be explicit about how AI-assisted outputs are approved, stored, reused, and protected. Silence in these areas creates assumptions, and assumptions almost always lead to disputes.

Employment and contractor arrangements need similar attention. If employees or contractors are using AI tools as part of their work, intellectual property assignments must still be clear, confidentiality obligations must still apply, and tool usage must align with company expectations. The fact that “AI helped” does not remove the need for discipline.

Simple usage rules are often enough. Clear guidance on what information may not be uploaded, which platforms may be used, and when AI-generated material requires review can significantly reduce exposure without slowing the business down. Most importantly, AI outputs need to be treated like any other business asset. If something supports revenue, differentiates the business, or is core to operations, it deserves the same protection mindset as any other form of intellectual property.

The same mistakes appear repeatedly. SMEs assume AI outputs are free or ownerless. Employees decide tool usage without guidance. Founders ignore AI use until investors or clients raise concerns. Large corporate AI policies are copied wholesale, without regard for SME reality. AI is treated as an IT issue rather than a governance one. Each of these mistakes compounds quietly, often unnoticed until the cost of correction is high.

AI itself is not the risk. Unexamined use is.

SMEs have always innovated before they formalised. AI simply accelerates that pattern. The businesses that will benefit most from these tools are not the most aggressive adopters, but the ones that pause long enough to ask what they are creating, who owns it, and how it can be protected without undermining momentum.

That pause is not driven by fear. It is what disciplined growth looks like.
