By Ramneek Pahwa
The COVID-19 pandemic has demanded that nations come together and take immediate action. These unprecedented measures are supported by the development of artificial intelligence models in the healthcare industry. As time is of the essence, readily available tools are being tested to their fullest capacity: infrared temperature monitors, Bluetooth technology that scans codes assigned to each citizen and prompts a warning on the mobile device, facial recognition, and location-tracking software built by third-party developers as well as state governments. Rapid measures are pushing us to ignore ethical boundaries of privacy, and existing datasets, shaped by societal biases, are being fed into models by public and private agencies in pursuit of more precise machine conclusions.
According to the UN Global Pulse, “when it comes to medical imaging, an AI model may perform certain tasks, such as reading CT lung scans, faster and, given the right data to train on, even more accurately than a medical professional.” The origin of the virus has been traced back to Wuhan, in China's Hubei Province. China is one of the leading countries in artificial intelligence systems, which allowed the state to track and halt the spread based on citizen movement. The same technology has long been used in e-commerce, telecom, and other technology-dependent industries to steer the actions and thoughts of individuals for commercial purposes. All of this raises important ethical issues.
According to the 'Privacy Issue', the virus is being labeled a “terror threat” by Israel in order to bypass ethical boundaries and employ counterterrorism surveillance to track the threat. India is stamping travelers entering the country and using social media channels, among other smartphone applications, to track activity and confirm social distancing. China is combining its pre-existing facial recognition methods with infrared technology to monitor temperatures in large crowds, and using location-tracking software, drones, and CCTVs to identify quarantine breaches and map the spread. South Korea and Taiwan have employed similar measures.
The United Kingdom, the United States of America, and India grant their citizens a constitutional right to individual privacy as a fundamental safeguard against “intrusive monitoring.” The danger of the virus is not limited to health; there is also the post-pandemic cost of losing our democratic notions. This kind of monitoring opens the possibility of the state manipulating society through technology in the future.
Deep learning is a branch of machine learning in which models are trained with a “series of labeled images into algorithms that pick out features within them and learn how to classify similar images,” according to an article published in the Guardian. The AI system learns to recognize diseases from scans paired with existing diagnoses. The article suggests such systems could in the future displace parts of the doctor-patient relationship and save big hospitals millions a year, freeing doctors to respond to more complex and serious cases.
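To make the idea concrete: learning from labeled examples and then classifying similar inputs can be illustrated, in a deliberately toy form, with a nearest-centroid classifier over tiny 3x3 “images.” This is only a sketch of the supervised-learning idea the Guardian article describes; real diagnostic systems use deep neural networks trained on vast labeled CT and X-ray datasets, and the pattern names and data below are invented for illustration.

```python
# Toy sketch of supervised image classification: a nearest-centroid
# classifier over 3x3 "images" flattened to 9 pixel values.
# It "trains" by averaging labeled examples, then classifies a new
# image by the closest average. Deep learning replaces this averaging
# with many learned layers of features, but the labeled-data workflow
# is the same in spirit.

def train(examples):
    """examples: list of (pixels, label) pairs. Returns one centroid per label."""
    sums, counts = {}, {}
    for pixels, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(pixels)
            counts[label] = 0
        sums[label] = [s + p for s, p in zip(sums[label], pixels)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(centroids, pixels):
    """Assign the label whose centroid is closest in squared distance."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], pixels))
    return min(centroids, key=dist)

# Hypothetical training set: "cross" vs "ring" patterns on a 3x3 grid.
training = [
    ([0, 1, 0, 1, 1, 1, 0, 1, 0], "cross"),
    ([0, 1, 0, 1, 0, 1, 0, 1, 0], "ring"),
]
model = train(training)
print(classify(model, [0, 1, 0, 1, 1, 1, 0, 1, 1]))  # a slightly noisy cross
```

The same structure, scaled up, is how a model can learn to flag a lung scan: the “labels” become diagnoses, and the hand-built distance measure is replaced by features the network learns itself.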
Artificial intelligence systems have been highlighted as a core issue of the current age in many studies and articles. AI technology processes the data collected and stored by the public and private corporations of Silicon Valley: Amazon, Google, Facebook, Microsoft, and the smaller organizations supporting their efforts, firms that assume the role of a public service while operating on “private profit.” Even though they run in the name of public service, their role must be contained and moderated to prevent the exploitation of that data for marketing purposes that depend on profiling individuals. It is therefore useful to examine the scale of the data collected in the course of this pandemic.
The Chinese model is being thoroughly examined by other researchers eager to advance the process at a faster pace, as China has become proficient at using such technology. A New York Times article describes facial recognition software being used in China's western region to track a Muslim minority group, the Uighurs, raising the question of automated racism in future technologies. Collecting data such as age, gender, ethnicity, the traits summarised by your likes and dislikes on social media platforms, or the cognitive patterns revealed by your Amazon orders will soon be a trivial matter. Cambridge Analytica was one such private organization that reduced transparency by shifting targeted advertising from the community level to the individual. This kind of precision has been enabled by software such as I.B.M.'s, which supplies “unconscious biases” to facial recognition cameras that sort data on certain attributes.
A similar question, “How much is enough?”, was ignored by many who are now facing its consequences. With Amazon Alexa and Google Home devices reaching into the confines of the home, and health applications taking user consent to store information about the functioning of your body in order to monitor and predict changes, the interventionist future might be much closer than one thinks.
Two weeks ago, India, much like Singapore, released a mobile application (Aarogya Setu) to track the mobility of the virus. But the terms of Aarogya Setu clearly state that “such personal information may also be shared with such other necessary and relevant persons as may be required to carry out necessary medical and administrative interventions.” Since then, the application has had over 10 million downloads, but not enough to actually make it a tracker of the virus. To use it, Indians would need a smartphone, and many are currently running out of food.