The word of the patent system is final. However, European patent legislation was last written in the late 1970s. At that time there was, understandably, no consideration of, nor indeed imagination about, how computer science would evolve and ultimately change the world as we know it. How could lawmakers over four decades ago have foreseen that a computer would one day become a tool to help invent? Or indeed invent on its own?
But the question remains: can a machine truly invent without human aid? This type of artificial intelligence is a staple of sci-fi films, most notably the Will Smith movie I, Robot. The best guess is that we are still at least 50 years away from achieving such a technological feat; others claim it will never happen. Professor Ryan Abbott may beg to differ.
The relatively recent DABUS AI project brought this subject bubbling to the surface. Led by Professor Abbott, the campaign saw a machine design a container. What came next is where things got interesting: the team named DABUS, an AI platform, as the inventor on the patent filing. This was deliberately provocative, and it worked, because it got everyone's attention and stimulated debate.
The problem was that every jurisdiction effectively dismissed the filing on the basis that only humans can be named as inventors, so ultimately it did not get them very far in the legal debate. The real question, though, was whether a computer can actually be considered the inventor and, if so, whether that diminishes the perceived quality of the invention.
The case of an AI-implemented invention presents a simpler paradigm. If AI is used to manage jet engine components, that is just an AI-implemented process: inventorship is awarded to the person who wrote the code, and the code itself is the invention. Another example of an AI-implemented innovation is the use of machine learning to match people on a dating platform; the individual who programmed it is the inventor.
Central to this discussion of inventorship and AI is the case where the invention itself is made using an AI machine or platform, and in health tech this is particularly significant. Drug development traditionally involves an individual or team considering various compounds that may treat a condition, which can mean hundreds or thousands of combinations and lab tests. Once an appropriate drug candidate is identified, a patent is then filed with some supporting data.
A properly trained machine can complete this painstaking process in a fraction of the time, potentially reducing the number of drug candidates from hundreds or thousands down to tens. An approved drug ready for marketing will still require many other clinical tests, but the point here is that the invention has been made, at least in part, using the AI platform. Is there an argument that the AI platform is a co-inventor? Or, put another way, is there no human inventor we can point to without the AI's contribution?
This is not a panacea, though. The future of innovation in drug development, or in health tech more broadly, will not be left solely to machines. AI has to be trained, and what it churns out is often nonsensical and must be sorted through by experts who can see the potential. In that respect it is a bit like a dot-to-dot book: the computer output does not give the complete picture, and work is still required to connect the dots into the final product.
With this conundrum to unravel, views on AI and inventorship can be refined into three schools of thought:
(1) Some people believe the AI platform is the inventor.
(2) Some people believe the AI platform and the human who intervened, either in training it or in interpreting the results, are joint inventors.
(3) Some people believe the AI platform is nothing more than a tool: the human programmed it and looked at the results, so the human is the inventor.
If a student uses a calculator at school to complete a sum, did the student do the sum or did the calculator solve it? One could argue it does not matter, because the answer has been obtained. So why is using a computer to sort through the possibilities of drug development, of which there may be many thousands, any different from trying every one of them in a lab? Because patents are supposed to reward human innovation, not machine innovation.
Frankly, for a health tech startup it is very alluring, and may attract investment, to declare that it has an AI platform that can do all the inventing. But legally that approach could be lining up a battle later on.
If a company finds itself in court defending a pharma patent against a background of web pages saying 'our AI did all the work', it is not a stretch to predict that the other side will argue the patent should never have been granted, either because the computer was responsible for the outcome or because the use of AI renders the invention obvious.
Inventorship is a real issue if you are using AI, and the law needs to go one of two ways. Either it simply accepts that the people who programmed the AI and selected from its results are the inventors (as is already accepted here in the UK for computer-generated artistic works), or lawmakers pursue the concept of a third category of person capable of making a valid patent filing: an electronic person (a computer), alongside the existing natural person (a human) and legal person (a company). That would also address one of the weaknesses of the DABUS case: how did the company obtain ownership from the machine it operated if the machine, as an entity, cannot own property itself?
For now at least, the approach should be not to name an AI as inventor when you file a patent application, and to adopt the position that it is just a sophisticated tool. The key argument is that AI is an aid to help invent, not the inventor. Only when we reach the threshold where, without the machine, there is no inventor at all do we truly need to revisit the law.