
Assessing Complexities in Intellectual Property Creation by Artificial Intelligence

I: Introduction

Humanity’s monopoly over creation and invention is at an end. It is undeniable that, amidst the Fourth Industrial Revolution, artificial intelligence (“AI”) has become capable of producing speech (WaveNet by DeepMind), animation (Midas Creature), journalistic articles (Cyborg by Bloomberg), musical compositions (The Emily Howell Project), legal research and arguments (ROSS), visual masterpieces (The Next Rembrandt Project), and creative literary and cinematographic works, such as the Sunspring movie. In fact, machine learning and the integration of deep neural networks have hastened the pace at which general intelligent systems can be realized.[1] A testament to AI’s breadth of potential expertise and computational creativity can be found in its ability to master board games involving complex decision-making (AlphaZero in chess), automate driving (Tesla’s Autopilot), and even pose as a cyber Instagram model (Blawko and Bermuda).

Consequently, contemporary copyright and patent regimes are struggling to define an AI’s legal identity, to assign the economic or moral rights stemming from its creation of intellectual property (IP), and to ascribe criminal and civil liability arising from damage and infringement respectively. This article therefore examines the questions of creation, ownership, and attribution of liability, and seeks to untangle the operational and legal complexities of AI.

II: Is Artificial Intelligence capable of creation, innovation, and ownership?

Weak AI derives no meaning from its instructions; it is used mostly in semi-autonomous or autonomous systems to complete mechanical tasks, and its outputs are simply the result of its algorithmic function.[2] Strong AI, by contrast, employs expert systems, natural language processing, and perception systems, in tandem with machine or deep learning and neural networks, to continuously evolve.[3] Put succinctly, data is used to train the algorithm, which enables it to adjust its internal parameters and improve the preliminary model.[4] Implementing a trained AI in a cyber-physical system ensures that the level of human intervention is inversely proportional to the level of automation. That is not to say that all robotic or algorithmic actions can be awarded intellectual property protection for their outputs.[5]
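The train-and-improve loop described above can be illustrated with a minimal sketch. This is purely hypothetical and drawn from no system discussed in this article: a toy model’s two parameters are repeatedly adjusted against training data until the preliminary model improves.

```python
# Minimal, purely illustrative training loop: data is used to "train the
# algorithm", which adjusts its parameters to improve the preliminary model.
def train(data, epochs=200, lr=0.05):
    """Fit y = w*x + b to (x, y) pairs by gradient descent on squared error."""
    w, b = 0.0, 0.0                        # the preliminary model
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y          # prediction error on one example
            grad_w += 2 * err * x / len(data)
            grad_b += 2 * err / len(data)
        w -= lr * grad_w                   # the model "modifies" itself
        b -= lr * grad_b
    return w, b

# Toy dataset generated by the hidden rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-3, 4)]
w, b = train(data)                         # w ≈ 2, b ≈ 1 after training
```

Real systems adjust millions of parameters rather than two, but the principle is the same: the quality of the output improves automatically as human intervention recedes.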

  1. Creation of Copyright

Copyright protection is granted only to an author whose work satisfies originality, expression, and fixation.[6] Burrow-Giles Lithographic Co. v. Sarony[7] established that purely mechanical works created by machines were not creative per se and thus could not be awarded protection. The threshold for originality was lowered further in Alfred Bell & Co. v. Catalda Fine Arts, Inc.,[8] which allowed accidental or unintentional variations on other artistic works of a similar character to be copyrighted as well. The vastness of the scope of ‘work’ under Section 2(y) of the Copyright Act, 1957, the ambiguity over what constitutes an ‘original’ work, and the absence of any requirement that such a work be of high quality or unique mean that creative endeavour is easily rewarded. However, matters are not so straightforward for creators that are neither juristic nor natural persons.

Although Cummins v. Bond[9] made it clear that ‘the non-human nature of the source of a work’ would not be a bar to protection, Bleistein v. Donaldson Lithographing Co.[10] later mandated that the ‘irreducible creative output of the human personality’ was a prerequisite for copyright. The Lovelace objection likewise contends that an AI’s rule-bound behaviour prevents it from being unpredictable, and therefore creative.[11] None of these jurisprudential assertions or judicial precedents, however, was delivered bearing in mind the machine-learning capabilities of contemporary AI.

For example, OpenAI’s text generator[12] is remarkably capable of writing poetry and prose; even though its grasp of semantics and conversational structure is superficial, it has acquired creative tenets through pattern-based algorithmic improvement. Its reliance on pre-existing datasets mimics human writers, who process and derive inspiration from pre-existing works. The evidence thus suggests that while not all AI is capable of original and creative thought, some is.
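As a toy illustration of such pattern-based generation (nothing like GPT-2’s actual architecture, only the underlying idea of recombining patterns learned from pre-existing text), a word-level Markov chain can be sketched as follows; the corpus and all names here are invented for the example:

```python
import random
from collections import defaultdict

def build_model(corpus):
    """Learn which word tends to follow which in a pre-existing corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)            # record each observed continuation
    return model

def generate(model, start, length=8, seed=0):
    """Recombine the learned patterns into 'new' text."""
    random.seed(seed)
    out = [start]
    while len(out) < length and model.get(out[-1]):
        out.append(random.choice(model[out[-1]]))
    return " ".join(out)

corpus = "the sun sets and the moon rises and the stars appear"
model = build_model(corpus)
text = generate(model, "the")
```

Every word the sketch emits is drawn from the source corpus, yet the sequence itself may never have appeared there, which is precisely the sense in which such systems “derive inspiration” from pre-existing works.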

  2. Patentable Inventions

Patentability rests on industrial application, novelty, and the presence of an inventive step under Section 2(j) of the Patents Act, 1970, in addition to the nature of the subject matter and the construction of the specifications.[13] An AI’s innovative capability is proven if it achieves a technological advancement that is not obvious to persons skilled in the art and is of economic significance. Predictability, proximity to the programmer’s code directives, and the level of human intervention can help determine the extent to which patents may be attributed to AI.

DABUS AI, created by the Artificial Inventor Project, was listed as the inventor when filing for patents on a warning light and a food container in August 2019. DABUS was neither created to solve a particular problem nor skilfully fed particular data to assist in its innovation.[14] As a connectionist ‘Creativity Machine’, it employed feedback loops between two neural networks that evaluated ideas on the patentability criteria and the salience of the instant inventions.[15] Even if one momentarily accepts the argument that the AI is restricted to its programming, any iterative or incremental improvements on the state of the art that it makes during the routine execution of its algorithm could potentially qualify for Patents of Addition[16] under Section 54. Although the UK and European Patent Offices considered the ‘inventions themselves worthy of patents’, the applications were rejected because the ‘inventor wasn’t a human’.[17]
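DABUS’s actual architecture is proprietary and far more elaborate, but the feedback loop between a network that proposes ideas and one that evaluates their salience can be caricatured as a generate-and-critique loop. All names, the numeric “ideas”, and the scoring criterion below are hypothetical stand-ins:

```python
import random

def propose(idea, rng, noise=0.5):
    """First network (caricatured): perturb an existing idea into a new one."""
    return [x + rng.uniform(-noise, noise) for x in idea]

def salience(idea, criterion):
    """Second network (caricatured): score how well an idea fits a criterion."""
    return -sum((x - c) ** 2 for x, c in zip(idea, criterion))

def creativity_loop(seed_idea, criterion, steps=500, seed=0):
    """Feedback loop: the critic's scores steer which proposals are kept."""
    rng = random.Random(seed)
    best = seed_idea
    for _ in range(steps):
        candidate = propose(best, rng)
        if salience(candidate, criterion) > salience(best, criterion):
            best = candidate               # feedback from critic to proposer
    return best

best = creativity_loop([0.0, 0.0], criterion=[1.0, -1.0])
```

The loop only ever keeps improvements, so the ‘idea’ drifts toward the critic’s criterion without any individual step having been programmed in advance, which is the sense in which such systems are said to search beyond their initial state.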

  3. Personhood and Ownership

The fundamental problem lies in the fact that AI is presently considered only a tool for development,[18] and therefore lacks any juristic personality. Only entities with personhood can exercise their rights, enter into contractual arrangements, own tangible and intangible property, and be held liable and/or penalized. The originator of creations and inventions is envisaged as a ‘person’, which disqualifies AI from owning intellectual property.[19] Attributing legal fiction to AI would expose it to duties, rights and liabilities that it is presently incapable of handling.

In a similar vein, even though copyright subsists in computer software through Section 2(o) read with Section 13(1)(a) of the Copyright Act, 1957, algorithms and computer programs are not inventions per se under Section 3(k) of the Patents Act, 1970. Guidelines 4.5.3 and 4.5.4 of the Guidelines for Examination of Computer Related Inventions (CRIs) also categorically state that algorithms and computer software per se are excluded from patentability.[20] This protection is flimsy on two counts: it allows potential infringers to reproduce the output with a modified codebase, and the requirement that software be attached to novel hardware to be patentable means that AI existing on the cloud or on servers is excluded from adequate protection.

The European Parliament even explored the prospect of bestowing ‘electronic personhood’ on smart autonomous robots, and by extension artificial intelligence, through a Resolution on Civil Law Rules on Robotics,[21] but the idea was swiftly denounced as inappropriate, unethical and bad in law. Experts further stated that electronic personality cannot derive from the Legal Entity Model, as that model implies the existence of human persons behind the corporate veil.[22] However, Shawn Bayern has demonstrated how American company law could be used to create a legal entity controlled solely by artificial intelligence, so the prospect of electronic personality may yet be reconsidered.[23]

But precedents such as Acohs Pty Ltd v. Ucorp Pty Ltd,[24] Feist Publications v. Rural Telephone Service Co.[25] and Infopaq International A/S v. Danske Dagblades Forening[26] were similar in identifying the human author’s mind as the origin of intellectual creation, without which no copyright could subsist. Similarly, Townsend v. Smith[27] constructed the conception test for patent ownership, which requires the inventor to have a pre-conceived, permanent idea in his mind prior to putting it into practice.

While Section 2(d)(vi) of the Indian Copyright Act addresses this by deeming the person who ‘causes the work to be created’ the author of a computer-generated work, the Indian Patents Act, like Sections 101 and 102 of the US Patent Act,[28] is adamant that only ‘persons’ can be the true and first inventor. The UK and New Zealand have adopted similar provisions in their legislation.[29] This legislative hesitation is understandable, given that AI ownership of intellectual property opens a Pandora’s box of infringement and liability claims.[30] In most cases, ownership has been dictated through the sweat-of-the-brow doctrine[31] arising from Lockean ethics, or through the work-for-hire doctrine. While the former rewards the programmer for his efforts through the rules of causation, intuition, and the principles of transitivity, the latter bestows the benefits on the entity that contractually commissioned the work. Users, too, have ownership claims, since their unique input produces a unique output. Presently, however, AIs have no avenue to own IP exclusively; at most, they may be considered joint inventors.[32]

III: Is AI capable of being attributed liability?

Artificial intelligence is also being deployed in scenarios that can prejudicially impact certain groups of people: in automated credit scoring, in autopilot features in cars requiring minimal human intervention, and in robotic surgery in life-threatening situations. AI is not yet infallible, and when its actions cause damage, the question of its liability arises.

  1. Civil Liability[33] 

There is no practicality in imposing fines on an entity that holds no bank account, but if AI were ascribed juristic personality, it could be held strictly liable, or liable for the tort of negligence or for breach of warranty.

Responding to software defects in the same manner as physical design flaws in cars or electronic products is counter-intuitive: the former can be resolved through over-the-air updates, while the latter require mass recalls and substantial financial resources.[34] In cases where damage is caused even though the AI was used reasonably, strict liability allows recovery without the difficulty of proving negligence.

To prove negligence, a plaintiff must establish a duty of care, its breach by the defendant, and resultant damage. Where the duty of care lay with the programmer to detect errors in the algorithmic functions, where the owner relied upon an incomplete or inaccurate knowledge base, where the manufacturer failed to include adequate instructions or warnings, or where the user supplied faulty input or used the system inappropriately, each actor’s liability can be linked to their expected responsibilities. Internal contractual arrangements can apportion liability amongst the involved actors. Here, even if the AI lacks personhood, the veil can be pierced to hold the ‘actual’ actor accountable for the breach. Injury can be proven through an act or omission, but liability will again depend on whether the AI merely recommends an action or takes one on the user’s behalf. The extent of human intervention determines the standard of care: expert systems operating with minimal intervention ought to be held to the standard of a professional or an expert. At a minimum, AI software must be sold with an express or implied warranty that it is “satisfactory as described and fit for a reasonable time”. Where “ultra-hazardous activities and instrumentalities” are involved, manufacturers and programmers must be able to provide ‘reasonable outputs from unreasonable inputs’.

Infringement of copyrighted or patented material by an AI during its process of creation or invention is permitted in certain jurisdictions, such as the US under the ‘fair use’ provision and the UK under Section 16(2) of the Copyright, Designs and Patents Act, 1988, provided the use is not ‘substantial’, does not produce commercial benefit, or is community-driven or charitable.[35] In all other cases, a proximity test must be conducted to gauge the programmer’s or user’s control over the AI and their closeness to the infringing activity. Section 16(1) explicitly restricts the making of even temporary copies of a copyrighted work, which is crucial to an AI’s learning and creation/invention process.

  2. Criminal Liability[36]

The imposition of criminal liability poses a different challenge: the mens rea, or guilty intention, that most jurisdictions require to prove a crime is difficult to ascribe to an entity that does not possess a ‘mind’ to invent or create. Conversely, it is simple to attribute an actus reus to the AI. Liability is again dependent on the human–AI relationship and on the probable consequences of the AI’s creation and use, and must be determined on a case-by-case basis.

Both mens rea and actus reus can be attributed to any actor who utilized the AI as an agent to perpetrate a crime. This includes situations where the actor was not actively involved but had ‘actual knowledge’ of a crime and failed to discharge their duties. Under strict liability, where no intent need be proven, the act alone is sufficient for the AI to be held liable. In situations where an AI was misused for purposes it was not programmed for, or where it was programmed to commit crimes, predictability and the intent behind its creation and use are considered. If a user misused a robot intended for surgery to incapacitate a co-worker, or if an AI was developed to crack bank accounts, the AI can either be considered an innocent agent or be treated as an accomplice.

The argument that the intent behind an AI’s function can be corrupted by malware that poisons the algorithm and repurposes the AI also deserves credence. Such a defense has succeeded in the UK, where a person claimed that the child pornography found on his computer was placed there by malware without his knowledge; he was acquitted when multiple Trojan programs were found on his machine. If both the AI and the human actor are perceived as innocent, the plaintiff may receive no justice, as the hacker must instead be tried under cyber laws, which carry their own implementation challenges. It must be admitted that the law is too nascent to resolve some of these contemporary lacunae.

  3. Fines and Punishment[37]

Assuming an AI infringed a patent, was negligent in discharging its duties, or committed a criminal offense, Gabriel Hallevy, an author and professor of law, explores certain routes to realizing the appropriate civil and criminal liability that are worth consideration.

Dispensing punishments to AI and human actors must come after examining the following questions:

  • What is the fundamental significance of the specific punishment for a human? 
  • What is the impact of the punishment on the AI?
  • What practical adjustments must be made to punishment to achieve the same significance?

Fines can either be collected from the human actors controlling the AI, co-extensive with their involvement and obligations, or compensation can be provided to a third party through labour, or the state might collect the fine in the form of community service. Imprisonment can be imposed through a restriction of the AI’s ‘freedom’: placing an embargo on its use for a determinate period of time and limiting its ability to fulfill its algorithmic function. For crimes demanding the death penalty, depriving the AI of its ‘life’ would translate to deletion of the AI, decommissioning of the particular robot, and a review of other systems relying on the software.

Hallevy’s assertions bridge the implementation gap by effectively treating AIs as legal entities, even though they enjoy no such recognition. Understandably, the flaw remains that an AI may attach no ‘significance’ to, or fear of, anything, having only a purpose based on its programming. The psycho-social impact of the penal system, which is itself the deterrent, is therefore lacking. No elegant solution is presently available, but certain measures can be taken to balance the need for regulation against the need for development.

IV: Suggestions and Conclusions

The present paradigm of allowing the originator to retain the IP created by his AI initially encourages innovation, but it also leads to the monopolization of IP. Equity guarantees an originator the fruits of his labour, but those fruits are not infinite.

  • Amendment of Patent Laws: The Patents Act, 1970 should be amended to allow algorithms and software to be patented per se and to expand the definition of ‘inventor’. Additional safeguards against obvious code re-engineering through the Copyright Act would further enhance the protection provided. 
  • Recognition for Artificial Intelligence:  TRIPS must be amended to uniformly recognize the role of AI in creating IP, and legislation must be used to regulate artificial intelligence behavior.[38]
  • Reconsider the Concept of Electronic Personality: While the concept might still be premature, AIs’ growing intelligence will soon enable them to handle the rights and obligations of a legal entity. The nature and extent of human intervention will also evolve concurrently.[39]
  • Classification of AI: AI should be classified based on the extent of human intervention involved, integration of intelligent systems, and the nature of its processing. In this way, different legislative approaches can be adopted. 
  • Differential IP Treatment: Copyright has a more subjective standard, while Patent has a more objective and technical standard to be satisfied. The law should reflect a different metric when considering AI IP claims. 
  • Creation of a Separate Yardstick for AI Crime: Requirement of a mens rea is not appropriate for a machine. Its intent is based on its code and should be judged accordingly. 
  • Baking Ethics into AI: To prevent abuse in the form of infringement and data-privacy violations, inter alia, ethical principles must be embedded in AI code. Asimov’s Laws of Robotics can act as a starting point.[40]
  • Open Source and Licensing: AI-created IP could also be placed in the public domain for community benefit. Since the originator would expect some reward for his labour, a compulsory licensing system can be considered. The concept of copyleft licenses has been adopted in the UK, wherein the author surrenders certain rights to allow free distribution and, in doing so, ensures that that version of the software remains free for improvement and alteration.[41]
  • Creation of a Tiered Royalty System: Since the human actors aren’t the direct originators, their royalty must be altered to better reflect their role. The AI portion of the royalties can be made part of a trust fund. 

From the above analysis, it can be concluded that AI can create and invent, and can be held liable to a certain extent, but cannot own IP. Admittedly, the measures suggested above are piecemeal; it is truly impossible to definitively understand the thought process of an AI, especially one that continuously evolves, and judicial precedents and present legislation are outdated on the topic. What must not be forgotten is that this is an opportunity to indirectly regulate human conduct in an area with immense potential for both benefit and harm. Any ‘reasonable’ person would see the need for proactive legislation to clarify the operational complexities in AI-created IP, contribute to AI development, and usher in general intelligence.

This article can be cited as:

Bluebook, 20th edn.: “Rohit Hebbale, Assessing Complexities in Intellectual Property Creation by Artificial Intelligence, Metacept – InfoTech and IPR, accessible at”


[1]Naveen Joshi, How Far Are We From Achieving Artificial General Intelligence? Forbes (2019), (last visited Aug 1, 2020).

[2]E-3 Magazine, Strong vs. Weak Artificial Intelligence, e3zine (2020), (last visited Aug 1, 2020).

[3]Monika Shailesh, Artificial Intelligence: Facets & Its Tussle With IPR – Technology – India, Mondaq (2018), (last visited Aug 1, 2020).

[4]Daniel Lloyd, How do intellectual property rights apply to AI? Lexology (2019), (last visited Aug 1, 2020).

[5]Committee on Legal Affairs, PR INL – European Parliament Europarl.Europa (2016), (last visited Aug 1, 2020).

[6]Mahendra Kumar Sankar, Copyright Law in India, Legalserviceindia (2017), (last visited Aug 1, 2020).

[7] 111 U.S. 53 (1884)

[8] 191 F.2d 99 (2d Cir. 1951)

[9] [1927] 1 Ch 167

[10] 188 U.S. 239 (1903)

[11]Swapnil Tripathi & Chandni Ghatak, (PDF) Artificial Intelligence and Intellectual Property ResearchGate (2017), (last visited Aug 5, 2020).

[12]James Vincent, OpenAI has published the text-generating AI it said was too dangerous to share The Verge (2019), (last visited Aug 5, 2020).

[13]Dr. Kalyan C Kankanala, Patentability Requirements in India BananaIP Counsels (2019), (last visited Aug 5, 2020).

[14]Angela Chen, Can an AI be an inventor? Not yet. MIT Technology Review (2020), (last visited Aug 5, 2020).

[15]Ryan Abbott & Stephen Thaler, Patent Applications The Artificial Inventor Project (2020), (last visited Aug 5, 2020).

[16]Vilas Shetty, Patent of Addition: Strategy for incremental innovation S. Majumdar & Co. (2019), (last visited Aug 5, 2020).

[17]Thomas Burri, The EU is right to refuse legal personality for Artificial Intelligence (2018), (last visited Aug 5, 2020).

[18]Emma Woollacott, Who owns AI’s ideas? Disputing intellectual property rights Raconteur (2020), (last visited Aug 5, 2020).

[19]Manish Ranjan, Meaning and Kind of Persons Legal Services India (2013), (last visited Aug 5, 2020).

[20]Office of the Controller General of Patents, Designs and Trademarks, Guidelines for Examination of Computer Related Inventions (CRIs) (2017), (last visited Aug 6, 2020).

[21]Legal Affairs Committee, EUROPEAN CIVIL LAW RULES IN ROBOTICS (2016), (last visited Aug 7, 2020).

[22]Open Letter to the European Commission ‘Artificial Intelligence and Robotics’ , Available at  content/uploads/2018/04/RoboticsOpenLetter.pdf

[23]Shawn Bayern, The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems SSRN (2016), Page 102, (last visited Aug 7, 2020).

[24] [2010] FCA 577

[25] 499 U.S. 340 (1991)

[26] Case C-5/08,  [19 July 2009] ECR I-6569

[27] 36 F.2d 292,293 (1929)

[28]Liza Vertinsky & Todd M. Rice, Thinking about Thinking Machines: Implications of Machine Inventors for Patent Law, B. U. J SCI. & TECH L. 82 (2002), Page 12, scitech/volume82/vertinsky&rice.pdf, (last visited Aug 5, 2020).

[29]Yohan Liyanage & Kathy Berry, INSIGHT: Intellectual Property Challenges During an AI Boom, Bloomberg Law News (2019), (last visited Aug 5, 2020).

[30]Dan Robitzski, Can AI own a patent? weforum (2019), (last visited Aug 7, 2020).

[31] Ladbroke vs. William Hill [1964] 1 All ER 465

[32]Emma Woollacott, Who owns AI’s ideas? Disputing intellectual property rights Raconteur (2020), (last visited Aug 5, 2020).

[33]John Kingston, Artificial Intelligence and Legal Liability- ResearchGate (2016), (last visited Aug 6, 2020).

[34]John Villasenor, Products liability law as a way to address AI harms Brookings (2019), (last visited Aug 6, 2020).

[35]Leigh Smith, AI and IP: copyright infringement by AI-Systems (UK law) Talking Tech (2017),–copyright-infringement-by-ai-systems–uk-law-.html (last visited Aug 6, 2020).

[36]John K.C Kingston, Artificial Intelligence and Legal Liability, Researchgate (2016), Page 2-5, (last visited Aug 7, 2020).

[37]Prof. Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities SSRN (2010), Page 39-42, (last visited Aug 7, 2020).

[38]Swapnil Tripathi & Chandni Ghatak, Artificial Intelligence and Intellectual Property, ResearchGate (2017), Page 96, (last visited Aug 7, 2020).

[39]Emma Woollacott, Who owns AI’s ideas? Disputing intellectual property rights Raconteur (2020), (last visited Aug 6, 2020).

[40]Committee on Legal Affairs, PR INL – European Parliament (2016), Page 14-16, (last visited Aug 7, 2020).

[41]UK Copyright Service, Copyleft- Fact Sheet P-20 (2019), (last visited Aug 6, 2020).

