The Imperative to Democratize Artificial Intelligence
MIT Technology Review recently published an article titled “An AI Ophthalmologist Shows How Machine Learning May Transform Medicine.”
The article describes how researchers at Google’s DeepMind subsidiary used artificial intelligence (AI) to scan images of human eyes and detect a common form of blindness as well as, or better than, trained experts can.
They achieved this using the same machine learning techniques Google and other tech giants, including Facebook, use to analyze images uploaded to their web platforms. Instead of writing complex programs to handle every conceivable detail in an image, researchers teach machines to learn on their own by exposing them to large volumes of pre-tagged examples.
According to the article, DeepMind’s algorithm studied some 128,000 retinal images that ophthalmologists had already classified.
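To make the technique concrete, here is a minimal sketch in Python using TensorFlow (one of the freely available libraries discussed later in this article). Everything specific in it is assumed for illustration: the image folder, the 128×128 image size, and the two-class healthy/diseased labels are hypothetical, and this is in no way DeepMind’s actual model or data.

```python
# A minimal sketch of learning from pre-tagged examples: no disease-specific
# rules are written by hand; the network infers its own features from labels.
import tensorflow as tf

# Hypothetical layout: images/healthy/*.png and images/diseased/*.png,
# where the subdirectory name is the human-assigned tag.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "images/", image_size=(128, 128), batch_size=32)

# A small convolutional network, generic rather than task-specific.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),  # two example classes: healthy / diseased
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# "Training" is simply repeated exposure to the tagged examples.
model.fit(train_ds, epochs=5)
```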
The breakthrough is only the latest in a long line of advances in AI. Machine learning is already widely used in real-world applications: sifting through the United Kingdom’s National Health Service records; automatically tagging – and flagging – images, video, and voice across vast social networks; improving efficiency at utility plants by spotting trends and automatically adjusting power consumption, inputs, and outputs; and developing protocols for both pharmaceutical production and genetic engineering.
DeepMind’s research into analyzing medical imagery is already set to be integrated into its UK NHS collaboration, according to a Guardian article titled “Google DeepMind pairs with NHS to use machine learning to fight blindness,” which reports:
Google DeepMind has announced its second collaboration with the NHS, working with Moorfields Eye Hospital in east London to build a machine learning system which will eventually be able to recognise sight-threatening conditions from just a digital scan of the eye.
The collaboration is the second between the NHS and DeepMind, which is the artificial intelligence research arm of Google, but DeepMind’s co-founder, Mustafa Suleyman, says this is the first time the company is embarking purely on medical research. An earlier, ongoing collaboration with the Royal Free hospital in north London is focused on direct patient care, using a smartphone app called Streams to monitor the kidney function of patients.
In essence, those who control AI technology have access to algorithms that can perform specific tasks better than any trained human can. This confers an immense advantage on those who control the technology and creates a disparity that those without it have no means of competing against.
As the number of applications expands, the gap between the corporations and nations wielding this power and everyone else is an alarming, emerging disparity – one that may lead to the same sort of abuses and exploitation that other technological disparities throughout history have wrought.
Democratizing AI
Developing AI applications is a big-data effort. Training machines, rather than merely programming them, means exposing them to large amounts of information they can sift through and train themselves on. To do this, large amounts of information must not only be collected but also tagged or otherwise classified, so machines have a baseline to improve against, as the sketch below illustrates.
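Here is a short, framework-free sketch of why tagging matters: the human-assigned tags are the yardstick a model’s predictions are scored against. The file names and tags below are invented purely for illustration.

```python
# Each record pairs raw data with a human-assigned tag. The tags are what
# give a learning system a baseline to measure itself against.
# (File names and tags below are hypothetical.)
labeled_data = [
    {"image": "scan_001.png", "tag": "healthy"},
    {"image": "scan_002.png", "tag": "diseased"},
    # ... in practice, hundreds of thousands more, tagged by experts
]

def accuracy(predict, dataset):
    """Score any predictor against the human tags (the baseline)."""
    correct = sum(1 for record in dataset
                  if predict(record["image"]) == record["tag"])
    return correct / len(dataset)

# Example: a naive predictor that tags everything "healthy" scores 50%
# on the toy data above; a trained model has to beat that baseline.
print(accuracy(lambda image: "healthy", labeled_data))
```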
The development of these large data sets, as well as of the algorithms to exploit them, currently requires large numbers of participants outside corporations like Google and subsidiaries like DeepMind.
Toward that end, open-source software libraries for machine learning, like Google’s TensorFlow, are available online for free. GitHub, an online development repository, offers access to a wide range of other machine learning libraries that coders and programmers can use.
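As a sketch of how low the barrier to entry is, the few lines below are all it takes to verify a working TensorFlow installation and pull down a full, pretrained image classifier. The choice of MobileNetV2 here is just one example among the pretrained models the library ships with.

```python
# TensorFlow is free to obtain; a single command fetches it from PyPI:
#   pip install tensorflow
import tensorflow as tf

print(tf.__version__)  # confirms the library is installed and importable

# Download a complete image classifier pretrained on ImageNet,
# so no data collection or training is required to start experimenting.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
model.summary()
```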
The physical hardware currently used to build deep learning machines includes GPUs (graphics processing units) similar to those found in high-end gaming computers. Instructions for building deep learning machines are available online, including information provided by companies like NVIDIA, which make commercially available GPUs.
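As a quick illustration, TensorFlow can report whether such a GPU is visible on a given machine; if none is found, it simply falls back to the CPU.

```python
# Check whether TensorFlow can see a CUDA-capable GPU (for example, an
# NVIDIA gaming card) on the local machine.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"Found {len(gpus)} GPU(s):", [gpu.name for gpu in gpus])
else:
    print("No GPU detected; computations will fall back to the CPU.")
```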
While it remains to be seen what individual or independent groups of developers can achieve in terms of democratizing this technology, it may be in the best interests of nation-states to begin developing their own AI programs rather than wait for Google, Facebook, and even China’s Baidu to “share” this technology with them.
It may also be in their best interests to examine the merits of promoting the democratization of this technology. Where institutions lack the resources to recruit high-level researchers, an alternative may be to democratize the technology, tapping a larger pool of talent to even the odds in the AI race while also raising public literacy regarding this increasingly pivotal field.
Research into AI cannot be “banned,” and breakthroughs cannot be “un-invented.” With the tools to advance AI already widely (and in some cases freely) available, attempts to put this civilization-changing technology “back in the box” will only waste time and resources. The only way to counter the harmful application of AI is to possess an equal or greater capacity to use the technology, and to increase the number of people who both understand how it works and can apply it in response to its harmful exploitation.
Just as information technology, nuclear weapons, and even firearms tilted the global balance of power in favor of those who first wielded them, before others acquired and exploited these technologies, AI too poses a threat unless and until it is more widely adopted and democratized.
AI can focus on and master specific tasks at superhuman levels; we ignore the challenge of balancing this emerging power at our own peril.
LocalOrg seeks to explore local solutions to global problems by empowering people locally with education and technology to not only survive, but to thrive.