
Assessing Complexities in Intellectual Property Creation by Artificial Intelligence

I: Introduction

Humanity’s monopoly over creation and invention is at an end. It is undeniable that, amidst the Fourth Industrial Revolution, artificial intelligence (“AI”) has become capable of producing speech (WaveNet by DeepMind), animation (Midas Creature), journalistic articles (Cyborg by Bloomberg), musical compositions (The Emily Howell Project), legal research and arguments (ROSS), visual masterpieces (The Rembrandt Project), and creative literary and cinematographic works, such as the film Sunspring. In fact, machine learning and the integration of deep neural networks have hastened the pace at which general intelligent systems can be realized.[1] A testament to the breadth of its potential expertise and computational creativity lies in its ability to master board games involving complex decision-making (AlphaZero in chess), automate driving through Tesla’s Autopilot, and even be a virtual Instagram model, like Blawko and Bermuda.

Consequently, contemporary copyright and patent regimes are struggling to define an AI’s legal identity, to assign the economic or moral rights stemming from its creation of intellectual property (IP), and to ascribe criminal and civil liability arising from damage and infringement respectively. This article therefore examines the questions of creation, ownership, and attribution of liability, and seeks to untangle the operational and legal complexities of AI.

II: Is Artificial Intelligence capable of creation, innovation, and ownership?

Weak AI derives no meaning from its instructions, is used mostly in semi-autonomous or autonomous systems to complete mechanical tasks, and its outputs are simply the result of its algorithmic function.[2] Strong AI, by contrast, employs expert systems, natural language processing, and perception systems, in tandem with machine or deep learning and neural networks, to continuously evolve.[3] Put succinctly, data is used to train the algorithm, which enables it to refine its model and improve upon the preliminary version.[4] When a trained AI is implemented in a cyber-physical system, the level of human intervention is inversely proportional to the level of automation. That is not to say that every robotic or algorithmic action can be awarded intellectual property protection for its output.[5]
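
To make the training process described above concrete, the following minimal sketch (in Python, using purely hypothetical toy data and parameters) shows how the data, rather than hand-written rules, repeatedly adjusts a preliminary model:

```python
# Minimal, illustrative sketch of the train-and-improve loop described above.
# The data and model here are hypothetical toys; real systems use far larger
# models and datasets, but the principle is the same: the data, not the
# programmer, dictates how the preliminary model's parameters change.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: inputs x and the outputs the model should learn.
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=100)

# Preliminary model: y_hat = w * x + b, with arbitrary starting parameters.
w, b = 0.0, 0.0
learning_rate = 0.1

for epoch in range(200):
    y_hat = w * x + b
    error = y_hat - y
    # Gradient descent step: each pass over the data nudges the parameters
    # toward values that better explain the examples seen so far.
    w -= learning_rate * 2 * np.mean(error * x)
    b -= learning_rate * 2 * np.mean(error)

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches the underlying 3.0 and 0.5
```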

  1. Creation of Copyright

Copyright protection is granted to an author only if the work satisfies the requirements of originality, expression, and fixation.[6] Burrow-Giles Lithographic Co. v. Sarony[7] clearly established that purely mechanical works created by machines were not creative per se and thus could not be awarded protection. The test for originality was lowered even further in Alfred Bell & Co. v. Catalda Fine Arts, Inc.,[8] to allow accidental or unintentional variations from other artistic works of similar character to be copyrighted as well. The vast scope of ‘work’ under Section 2(y) of the Copyright Act, 1957, the ambiguity over what constitutes an ‘original’ work, and the absence of any requirement that such a work be of high quality or unique mean that it is easy to be rewarded for creative endeavour. However, it is not so straightforward for entities that are creators but are neither juristic nor natural persons.

Although Cummins v. Bond[9] made it clear that ‘the non-human nature of the source of a work’ would not be a bar to protection, Bleistein v. Donaldson Lithographing Co.[10] mandated that the ‘irreducible creative output of the human personality’ was a prerequisite for copyright. Lovelace also contends that AI’s rule-bound behavior prevents it from achieving the ability to be unpredictable, and therefore creative.[11] None of these jurisprudential assertions or judicial precedents, however, was delivered bearing in mind the machine-learning capabilities of contemporary AI.

For example, OpenAI’s text generator[12] is eminently capable of writing poetry or prose, and even though its understanding of semantics and conversational structure is superficial, it has acquired creative tenets through pattern-based algorithmic improvement. Its reliance on pre-existing datasets mimics human writers, who process and derive inspiration from pre-existing works. The evidence thus suggests that while not all AI systems are capable of original and creative thought, some are.
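
As an illustration of the pattern-based generation described above, the following sketch assumes the Hugging Face transformers library and the publicly released GPT-2 weights are available; the prompt and settings are purely illustrative:

```python
# Minimal sketch of pattern-based text generation, assuming the Hugging Face
# "transformers" library and the public GPT-2 weights; the prompt is illustrative.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled "creativity" reproducible

# The model continues the prompt by sampling likely next tokens, i.e. by drawing
# on statistical patterns learned from its pre-existing training corpus.
outputs = generator(
    "The question of whether a machine can own its creations",
    max_length=60,
    do_sample=True,
    num_return_sequences=2,
)
for out in outputs:
    print(out["generated_text"])
    print("---")
```

Each run produces a different continuation of the same prompt, which is the sense in which the output is shaped, but not dictated, by the works the model was trained on.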

  2. Patentable Inventions

Patentability is based on industrial application, novelty, and the presence of an inventive step under Section 2(j) of the Patents Act, 1970, in addition to the nature of the subject matter and the construction of the specifications.[13] An AI’s innovative capability is established if it achieves a technological advancement that is not obvious to a person skilled in the art and is of economic significance. Predictability, proximity to the programmer’s code directives, and the level of human intervention can help determine the extent to which patents may be attributed to AI.

DABUS, an AI created by Stephen Thaler and championed by the Artificial Inventor Project, was listed as the inventor in patent applications for a warning light and a food container filed in August 2019. DABUS was neither created to solve a particular problem nor skilfully fed particular data to assist in its innovation.[14] As a connectionist ‘Creativity Machine’, it employed feedback loops between two neural networks that evaluated ideas against the patentability criteria and the salience of the instant inventions.[15] Even momentarily accepting the argument that an AI is restricted to its programming, any iterative or incremental improvements on the state of the art made during the routine execution of its algorithm could potentially qualify for Patents of Addition[16] under Section 54. Although the UK and European Patent Offices considered the ‘inventions themselves worthy of patents’, the applications were rejected because the ‘inventor wasn’t a human’.[17]
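
DABUS itself is proprietary, so the following is only a conceptual sketch of a generate-and-evaluate feedback loop of the kind described above, not a reconstruction of Thaler’s architecture; every function, parameter, and scoring rule here is hypothetical:

```python
# Conceptual sketch of a generate-and-evaluate feedback loop. It is NOT a
# reconstruction of DABUS; both components are stand-in toy functions.
import random

def generator(seed_idea):
    """Perturb an existing 'idea' (a vector of design parameters) to propose a variant."""
    return [x + random.gauss(0, 0.1) for x in seed_idea]

def critic(idea):
    """Score an idea's 'salience'; here, an arbitrary toy objective."""
    return -sum((x - 0.7) ** 2 for x in idea)

best = [random.random() for _ in range(4)]  # initial random idea
for step in range(1000):
    candidate = generator(best)
    # Feedback loop: the critic's evaluation steers what the generator proposes
    # next, so improvement emerges without step-by-step human direction.
    if critic(candidate) > critic(best):
        best = candidate

print("best idea parameters:", [round(x, 2) for x in best])
```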

  3. Personhood and Ownership

The fundamental problem lies in the fact that AI is presently considered only a tool for development,[18] and therefore lacks any juristic personality. Only entities with personhood can exercise rights, enter into contractual arrangements, own tangible and intangible property, and be held liable and/or penalized. The originator of creations and inventions is envisaged as a ‘person’, which disqualifies AI from owning intellectual property.[19] Attributing legal fiction to AI would expose it to duties, rights, and liabilities that it is presently incapable of handling.

In a similar vein, even though copyright subsists in computer software through Section 2(o) read with Section 13(1)(a) of the Copyright Act, 1957, algorithms and computer programs are not inventions per se under Section 3(k) of the Patents Act, 1970. Guidelines 4.5.3 and 4.5.4 of the Guidelines for Examination of Computer Related Inventions (CRIs) also categorically state that algorithms and computer software per se are excluded from patentability.[20] Not only is this protection flimsy, since it allows potential infringers to reproduce the output with a modified codebase, but the requirement that software be claimed in conjunction with novel hardware to be patentable also means that AI existing on the cloud or on servers is excluded from adequate protection.

The European Parliament even explored the prospect of bestowing ‘electronic personhood’ on smart autonomous robots, and by extension artificial intelligence, through a Resolution on Civil Law Rules on Robotics,[21] but the idea was swiftly denounced as inappropriate, unethical, and bad in law. Experts further stated that an electronic personality cannot be derived from the legal entity model, as that model implies the existence of human persons behind the corporate veil.[22] However, Shawn Bayern has demonstrated how American company law could be used to create a legal entity controlled solely by artificial intelligence, so the prospect of electronic personality may yet be reconsidered.[23]

However, precedents such as Acohs Pty Ltd v. Ucorp Pty Ltd,[24] Feist Publications v. Rural Telephone Service Company,[25] and Infopaq International A/S v. Danske Dagblades Forening[26] were similar in their approach of identifying the mind as the origin of the human author’s intellectual creation, without which no copyright could subsist. Similarly, Townsend v. Smith[27] constructed the conception test for patent ownership, which requires the inventor to have a pre-conceived, permanent idea in his mind before it is reduced to practice.

While Section 2(d)(vi) of the Indian Copyright Act addresses this by deeming the person who ‘causes the work to be created’ the author of a computer-generated work, the Indian Patents Act, like Sections 101 and 102 of the US Patent Act,[28] stubbornly requires a ‘person’ to be the true and first inventor. The UK and New Zealand have adopted similar provisions in their legislation.[29] This legislative hesitation is understandable, given that AI ownership of intellectual property opens a Pandora’s box of infringement and liability claims.[30] In most cases, ownership has been dictated through the sweat-of-the-brow doctrine[31] rooted in Lockean ethics, or through the work-for-hire doctrine. While the former rewards the programmer for his efforts through the rules of causation, intuition, and the principles of transitivity, the latter bestows the benefits on the entity that contractually commissioned the work. Users may also have ownership claims, since their unique inputs produce unique outputs. At present, however, AI has no avenue to own IP exclusively; at best, it may be considered a joint inventor.[32]

III: Is AI capable of being attributed liability?

Artificial intelligence is also being deployed in scenarios that can prejudicially impact certain groups of people: automated credit scoring, autopilot features in cars requiring minimal human intervention, and robotic surgery in life-threatening situations. AI is not yet infallible, and when its actions do cause damage, the issue of its liability arises.

  1. Civil Liability[33] 

There is little practicality in imposing fines on an entity that holds no bank account, but if AI were ascribed a juristic personality, it could be held strictly liable, or liable for the tort of negligence or for breach of warranty.

Treating software defects in the same manner as physical design flaws in cars or electronic products is counter-intuitive, as the former can be resolved through over-the-air updates, while the latter requires mass recalls and substantial financial resources.[34] Where damage is caused even though the AI was used reasonably, liability is harder to escape under a strict liability standard.

To prove negligence in a suit, a standard duty of care, its breach by the defendant, and resultant damage to the plaintiff are required. Where the duty of care lay with the programmer to detect errors in the algorithmic functions, where the owner relied upon an incomplete or inaccurate knowledge base, where the manufacturer failed to include adequate instructions or warnings, or where the user supplied faulty input or used the system inappropriately, their liability can be linked to their expected responsibilities. Internal contractual arrangements can apportion liability amongst the involved actors. Here, even if the AI lacks personhood, the veil can be pierced to hold the ‘actual’ actor accountable for the breach.

Injury can be proven through an act or omission, but liability will again depend on whether the AI merely recommends an action or takes one on the user’s behalf. The extent of human intervention determines the standard of care: expert systems operating with minimal intervention ought to be held to the standard of a professional or an expert. At a minimum, AI software must be sold with an express or implied warranty that it is “satisfactory as described and fit for a reasonable time”. Where “ultra-hazardous activities and instrumentalities” are involved, manufacturers and programmers must be able to provide ‘reasonable outputs from unreasonable inputs’.

Infringement of copyrighted or patented material by an AI during its process of creation or invention is permitted in certain jurisdictions, such as the US and the UK, under the ‘fair use’ doctrine and Section 16(2) of the Copyright, Designs and Patents Act, 1988 respectively, provided the use is not ‘substantial’ and does not produce commercial benefit, or is community-driven or charitable.[35] In all other cases, a proximity test must be conducted to gauge the programmer’s or user’s control over the AI and their closeness to the infringing activity. Section 16(1) explicitly restricts the making of even temporary copies of copyrighted work, which is crucial to an AI’s learning and creation/invention process.

  2. Criminal Liability[36]

The imposition of criminal liability poses a different challenge: the mens rea, or guilty intention, that most jurisdictions require to prove a crime is difficult to ascribe to an entity that does not possess a ‘mind’ to invent or create. Conversely, it is simple to attribute an actus reus to the AI. Liability again depends on the human-AI relationship and the probable consequences of the AI’s creation and use, and must be determined on a case-by-case basis.

Both mens rea and actus reus can be attributed to any actor who utilized the AI as an agent to perpetrate a crime. This includes situations where the actor was not actively involved but had ‘actual knowledge’ of a crime and failed to discharge their duties. Under strict liability, where no intent needs to be proven in the event of a breach, the act alone is sufficient for the AI to be held liable. In situations where an AI is put to a wrongful use it was not programmed for, or where it was programmed to commit crimes, predictability and the intent behind its creation and use are considered. If a user misuses a robot intended for surgery to incapacitate a co-worker, or if an AI is developed to crack bank accounts, the AI can either be considered an innocent agent or be treated as an accomplice.

The argument that the intent behind an AI’s function can be further corrupted through malware that poisons the algorithm and repurposes the AI deserves credence. Such a defense has worked in the UK, where a person claimed that the child pornography found on his computer had been placed there by malware without his knowledge; he was acquitted when multiple Trojan programs were found on the machine. If both the AI and the human actor are perceived as innocent, the victim may receive no justice, as the hacker must instead be pursued under cyber laws, which carry their own implementation challenges. It must be admitted that the law is too nascent to resolve some of these contemporary lacunae.

  3. Fines and Punishment[37]

Assuming an AI infringed a patent, was negligent in discharging its duties, or committed a criminal offense, Gabriel Hallevy, an author and professor of law, explores certain routes, worth consideration, for realizing the appropriate civil and criminal liability.

Dispensing punishments to AI and human actors must come after examining the following questions:

  • What is the fundamental significance of the specific punishment for a human? 
  • What is the impact of the punishment on the AI?
  • What practical adjustments must be made to punishment to achieve the same significance?

Fines can either be collected from the human actors controlling the AI, co-extensive with their involvement and obligations, or compensation can be provided to a third party through labour, or the state might collect the fine in the form of community service. Imprisonment can be imposed through a restriction of the AI’s ‘freedom’, which translates to placing an embargo on its use for a determinate period and limiting its ability to fulfill its algorithmic function. Crimes demanding the death penalty would result in depriving the AI of its ‘life’, which translates to deletion of the AI, decommissioning of the particular robot, and a review of other systems relying on the software.

Hallevy’s framework bridges the implementation gap by effectively treating AI as a legal entity, even though it enjoys no such recognition. Understandably, the flaw remains that an AI may attach no ‘significance’ or fear to anything, having only a purpose based on its programming. The psycho-social impact of the penal system, which is itself a deterrent, is therefore lacking. There is no elegant solution available at present, but certain measures can be taken to balance the need for regulation with the need for development.

IV: Suggestions and Conclusions

The present paradigm of allowing the originator to retain the IP created by his AI initially encourages innovation, but it also leads to the monopolization of IP. Equity guarantees an originator the fruits of his labour, but they are not infinite. 

  • Amendment of Patent Laws: The Patents Act, 1970 should be amended to allow algorithms and software to be patented per se and to expand the definition of ‘inventor’. Additional safeguards against obvious code re-engineering through the Copyright Act would further enhance the protection provided. 
  • Recognition for Artificial Intelligence:  TRIPS must be amended to uniformly recognize the role of AI in creating IP, and legislation must be used to regulate artificial intelligence behavior.[38]
  • Reconsider the Concept of Electronic Personality: While the concept might still be premature, the growing intelligence of AI systems will soon enable them to handle the rights and obligations of a legal entity. The nature and extent of human intervention will also evolve concurrently.[39]
  • Classification of AI: AI should be classified based on the extent of human intervention involved, integration of intelligent systems, and the nature of its processing. In this way, different legislative approaches can be adopted. 
  • Differential IP Treatment: Copyright has a more subjective standard, while Patent has a more objective and technical standard to be satisfied. The law should reflect a different metric when considering AI IP claims. 
  • Creation of a Separate Yardstick for AI Crime: Requirement of a mens rea is not appropriate for a machine. Its intent is based on its code and should be judged accordingly. 
  • Baking Ethics into AI: To prevent abuse in the form of infringement, data privacy violations, and the like, ethical principles must be embedded in its code. Asimov’s Laws of Robotics can act as a starting point.[40]
  • Open Source and Licensing: AI-created IP could also potentially be placed in the public domain for community benefit. One might argue that the originator would expect some reward for his labour, so a compulsory licensing system can be considered. The concept of Copyleft Licenses has been adopted in the UK wherein the author surrenders his rights to allow free distribution, and in doing so, ensures that that version of the software remains free for improvement and alteration.[41]
  • Creation of a Tiered Royalty System: Since the human actors aren’t the direct originators, their royalty must be altered to better reflect their role. The AI portion of the royalties can be made part of a trust fund. 

From the above analysis, it can be concluded that AI can create and invent, and can be held liable to a certain extent, but cannot own IP. Admittedly, the measures suggested above are piecemeal; it is truly impossible to definitively understand the thought process of an AI, especially one that continuously evolves, and judicial precedent and present legislation are outdated on the topic. What must not be forgotten is that this is an opportunity to indirectly regulate human conduct in an area with immense potential for both benefit and harm. Any ‘reasonable’ person would see the need for proactive legislation to clarify the operational complexities in AI-created IP, contribute to AI development, and usher in general intelligence.

This article can be cited as:

Bluebook, 20th edn.: “Rohit Hebbale, Assessing Complexities in Intellectual Property Creation by Artificial Intelligence, Metacept – InfoTech and IPR, accessible at https://metacept.com/assessing-complexities-in-intellectual-property-creation-by-artificial-intelligence/.”

References:


[1]Naveen Joshi, How Far Are We From Achieving Artificial General Intelligence? Forbes (2019), https://www.forbes.com/sites/cognitiveworld/2019/06/10/how-far-are-we-from-achieving-artificial-general-intelligence/ (last visited Aug 1, 2020).

[2]E-3 Magazine, Strong vs. Weak Artificial Intelligence, e3zine (2020), https://e3zine.com/strong-artificial-intelligence/ (last visited Aug 1, 2020).

[3]Monika Shailesh, Artificial Intelligence: Facets & Its Tussle With IPR – Technology – India, Mondaq (2018), https://www.mondaq.com/india/new-technology/740638/artificial-intelligence-facets-its-tussle-with-ipr (last visited Aug 1, 2020).

[4]Daniel Lloyd, How do intellectual property rights apply to AI? Lexology (2019), https://www.lexology.com/library/detail.aspx?g=44664e18-6ff3-4a5b-9d2f-109afa811f29 (last visited Aug 1, 2020).

[5]Committee on Legal Affairs, PR INL – European Parliament Europarl.Europa (2016), https://www.europarl.europa.eu/doceo/document/JURI-PR-582443_EN.pdf?redirect (last visited Aug 1, 2020).

[6]Mahendra Kumar Sankar, Copyright Law in India, Legalserviceindia (2017), http://www.legalserviceindia.com/article/l195-Copyright-Law-in-India.html (last visited Aug 1, 2020).

[7] 111 U.S. 53 (1884)

[8] 191 F.2d 99 (2d Cir. 1951)

[9] [1927] 1 Ch 167

[10] 188 U.S. 239 (1903)

[11]Swapnil Tripathi & Chandni Ghatak, (PDF) Artificial Intelligence and Intellectual Property ResearchGate (2017), https://www.researchgate.net/publication/323557478_Artificial_Intelligence_and_Intellectual_Property (last visited Aug 5, 2020).

[12]James Vincent, OpenAI has published the text-generating AI it said was too dangerous to share The Verge (2019), https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters (last visited Aug 5, 2020).

[13]Dr. Kalyan C Kankanala, Patentability Requirements in India BananaIP Counsels (2019), https://www.bananaip.com/ip-news-center/patentability-requirements-in-india/ (last visited Aug 5, 2020).

[14]Angela Chen, Can an AI be an inventor? Not yet. MIT Technology Review (2020), https://www.technologyreview.com/2020/01/08/102298/ai-inventor-patent-dabus-intellectual-property-uk-european-patent-office-law/ (last visited Aug 5, 2020).

[15]Ryan Abbott & Stephen Thaler, Patent Applications The Artificial Inventor Project (2020), http://artificialinventor.com/patent-applications/ (last visited Aug 5, 2020).

[16]Vilas Shetty, Patent of Addition: Strategy for incremental innovation S. Majumdar & Co. (2019), https://www.majumdarip.com/blog_post/patent-of-addition-a-strategy-for-incremental-innovation/ (last visited Aug 5, 2020).

[17]Thomas Burri, The EU is right to refuse legal personality for Artificial Intelligence www.euractiv.com (2018), https://www.euractiv.com/section/digital/opinion/the-eu-is-right-to-refuse-legal-personality-for-artificial-intelligence/ (last visited Aug 5, 2020).

[18]Emma Woollacott, Who owns AI’s ideas? Disputing intellectual property rights Raconteur (2020), https://www.raconteur.net/risk-management/ai-ip-rights (last visited Aug 5, 2020).

[19]Manish Ranjan, Meaning and Kind of Persons Legal Services India (2013), http://www.legalservicesindia.com/article/2316/Meaning-and-Kind-of-Person.html (last visited Aug 5, 2020).

[20]Office of the Controller General of Patents, Designs and Trademarks, Guidelines for Examination of Computer Related Inventions (CRIs) ipindia.nic.in (2017), http://www.ipindia.nic.in/writereaddata/Portal/IPOGuidelinesManuals/1_86_1_Revised__Guidelines_for_Examination_of_Computer-related_Inventions_CRI__.pdf (last visited Aug 6, 2020).

[21]Legal Affairs Committee, EUROPEAN CIVIL LAW RULES IN ROBOTICS Europarl.europa.eu (2016), http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf (last visited Aug 7, 2020).

[22]Open Letter to the European Commission ‘Artificial Intelligence and Robotics’, available at https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2018/04/RoboticsOpenLetter.pdf

[23]Shawn Bayern, The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems SSRN (2016), Page 102, https://poseidon01.ssrn.com/delivery.php?ID=639031071067109079087127029074005103097015064044037086104072016113002091101009027027063025012043117004047031070013073085011127123037031086044067113068120111125119105036051031098088101126019085003001076102090013071071097092107087088113105031030012083005 (last visited Aug 7, 2020).

[24] [2010] FCA 577

[25] 499 U.S. 340 (1991)

[26] Case C-5/08, [2009] ECR I-6569

[27] 36 F.2d 292,293 (1929)

[28]Liza Vertinsky & Todd M. Rice, Thinking About Thinking Machines: Implications of Machine Inventors for Patent Law, B.U. J. Sci. & Tech. L. 82 (2002), Page 12, http://www.bu.edu/law/journals-archive/scitech/volume82/vertinsky&rice.pdf (last visited Aug 5, 2020).

[29]Yohan Liyanage & Kathy Berry, INSIGHT: Intellectual Property Challenges During an AI Boom, Bloomberg Law News (2019), https://news.bloomberglaw.com/ip-law/insight-intellectual-property-challenges-during-an-ai-boom (last visited Aug 5, 2020).

[30]Dan Robitzski, Can AI own a patent? weforum (2019), https://www.weforum.org/agenda/2019/08/can-ai-own-a-patent/ (last visited Aug 7, 2020).

[31] Ladbroke vs. William Hill [1964] 1 All ER 465

[32]Emma Woollacott, Who owns AI’s ideas? Disputing intellectual property rights Raconteur (2020), https://www.raconteur.net/risk-management/ai-ip-rights (last visited Aug 5, 2020).

[33]John Kingston, Artificial Intelligence and Legal Liability- ResearchGate (2016), https://www.researchgate.net/profile/John_Kingston2/publication/309695295_Artificial_Intelligence_and_Legal_Liability/links/5a39397caca27208acc79e70/Artificial-Intelligence-and-Legal-Liability.pdf (last visited Aug 6, 2020).

[34]John Villasenor, Products liability law as a way to address AI harms Brookings (2019), https://www.brookings.edu/research/products-liability-law-as-a-way-to-address-ai-harms/ (last visited Aug 6, 2020).

[35]Leigh Smith, AI and IP: copyright infringement by AI-Systems (UK law) Talking Tech (2017), https://talkingtech.cliffordchance.com/en/ip/copyright/ai-and-ip–copyright-infringement-by-ai-systems–uk-law-.html (last visited Aug 6, 2020).

[36]John K.C Kingston, Artificial Intelligence and Legal Liability, Researchgate (2016), Page 2-5, https://www.researchgate.net/profile/John_Kingston2/publication/309695295_Artificial_Intelligence_and_Legal_Liability/links/5a39397caca27208acc79e70/Artificial-Intelligence-and-Legal-Liability.pdf (last visited Aug 7, 2020).

[37]Prof. Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities SSRN (2010), Page 39-42, https://poseidon01.ssrn.com/delivery.php?ID=606070026007126075065078081092007086026071069006028088099020122064088025105068001022096019020106111061101117115031123110001094015046040047000111005095081110016125060034044026012095088096003073117076114070067089106004102087094018010118101120030125117 (last visited Aug 7, 2020).

[38]Swapnil Tripathi & Chandni Ghatak, Artificial Intelligence and Intellectual Property, ResearchGate (2017), Page 96, https://www.researchgate.net/publication/323557478_Artificial_Intelligence_and_Intellectual_Property (last visited Aug 7, 2020).

[39]Emma Woollacott, Who owns AI’s ideas? Disputing intellectual property rights Raconteur (2020), https://www.raconteur.net/risk-management/ai-ip-rights (last visited Aug 6, 2020).

[40]Committee on Legal Affairs, PR INL – European Parliament Europarl.europa.eu (2016), Page 14-16, https://www.europarl.europa.eu/doceo/document/JURI-PR-582443_EN.pdf?redirect (last visited Aug 7, 2020).

[41]UK Copyright Service, Copyleft- Fact Sheet P-20 Copyrightservice.co.uk (2019), https://copyrightservice.co.uk/copyright/p20_copyleft (last visited Aug 6, 2020).
