One of the saddest aspects of the ongoing pandemic is that COVID-19 has a disproportionate effect on Americans of color and low-income Americans due to structural factors rooted in history and long-standing public prejudice.
Most at risk during this pandemic are the 24 million workers with the lowest incomes: people who have less job security and cannot work from home. In fact, only 9.2% of workers in the bottom income quartile are able to work from home. Compare this to 61.5% of the top quartile, and the inequality is staggering. In addition, people in these jobs generally lack the financial security to avoid interacting with the public by stockpiling food and household supplies, buying products online, or avoiding public transportation. They cannot insulate themselves. They must go out far more than other groups, increasing their risk of infection.
Historically disadvantaged groups are also the hardest hit by the economic impact of the pandemic. They are overrepresented in the worst-affected industries. The disparities were dire even before COVID-19, with the typical Black and Latino households having net worths of just $17,100 and $20,765, respectively, compared to $171,000 for the typical white household. A prolonged health and economic crisis will only exacerbate these already extreme differences.
AI as a beacon of hope
A rare, encouraging aspect of the ongoing response to the pandemic is the use of advanced technologies, especially artificial intelligence, to tackle tasks ranging from supply chains to early vaccine research.
The potential of humans + AI far exceeds that of people working alone, but there are significant risks that require careful consideration. AI requires huge amounts of data, and that data carries the very societal shortcomings and injustices that produced these disproportionate health and financial consequences in the first place.
In short, we cannot use a tool until we know that it works and understand what unforeseen consequences it can produce. Some health systems have rushed to modify existing AI models to help track patients and manage the supply of beds, ventilators, and other equipment in their hospitals. Researchers have tried to develop artificial intelligence models from scratch that focus on the unique effects of COVID-19, but many of these tools have struggled with issues of bias and accuracy. Balancing the instinct to "help now" against the risk of unforeseen consequences amid the pressures of the COVID-19 pandemic is why the responsible use of AI is more important now than ever.
4 ways to purposefully and responsibly use AI to combat COVID-19
1. Avoid delegating critical-system decisions to algorithms
Consider an artificial intelligence system designed to distribute ventilators and medical equipment among hospitals to maximize survival. Disadvantaged populations have higher rates of comorbidities and are therefore less likely to receive supplies if the system is not properly designed. If these pre-existing disparities are not taken into account when designing an AI system, well-meaning efforts can end up diverting deliveries away from particularly vulnerable communities.
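To make the failure mode concrete, here is a minimal, entirely hypothetical sketch. A naive allocator that ranks requests purely on predicted survival can leave a high-comorbidity hospital with nothing; reserving a guaranteed floor for each hospital is one crude guard. All names, numbers, and thresholds are invented for illustration, not a real triage policy.

```python
# Hypothetical sketch: allocating a scarce supply of ventilators to hospitals.
# A naive allocator ranks requests purely by predicted survival benefit, which
# can systematically starve communities with higher comorbidity rates.

def naive_allocate(requests, stock):
    """Give units to the highest predicted-survival requests until stock runs out."""
    alloc = {r["hospital"]: 0 for r in requests}
    for r in sorted(requests, key=lambda r: r["pred_survival"], reverse=True):
        take = min(r["need"], stock)
        alloc[r["hospital"]] += take
        stock -= take
    return alloc

def floor_allocate(requests, stock, floor=0.2):
    """Reserve a minimum fraction of each hospital's need before ranking the rest."""
    alloc = {}
    for r in requests:
        guaranteed = min(int(r["need"] * floor), stock)
        alloc[r["hospital"]] = guaranteed
        stock -= guaranteed
    for r in sorted(requests, key=lambda r: r["pred_survival"], reverse=True):
        take = min(r["need"] - alloc[r["hospital"]], stock)
        alloc[r["hospital"]] += take
        stock -= take
    return alloc

requests = [
    {"hospital": "A", "need": 10, "pred_survival": 0.9},  # affluent area
    {"hospital": "B", "need": 10, "pred_survival": 0.6},  # higher comorbidities
]
print(naive_allocate(requests, 10))  # hospital B gets nothing
print(floor_allocate(requests, 10))  # hospital B keeps a guaranteed floor
```

A real system would need clinically and ethically vetted constraints, but even this toy shows how the objective function, not any malicious intent, produces the disparate outcome.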
Artificial intelligence is also being used to improve supply chains across sectors. The Joint Artificial Intelligence Center has built an AI prototype that can track data on ventilators, PPE, medical supplies, and food. When the goal is to anticipate panic buying and ensure that healthcare workers have access to the equipment they need, this is a responsible use of AI.
As these examples show, a use case can quickly become problematic when decision-making authority is delegated to the algorithm. The best and most responsible use of AI here is maximizing efficiency in getting necessary supplies to those who truly need them. Previous AI failures show the need for healthy skepticism before delegating potentially life-and-death decisions to an algorithm.
2. Beware of disproportionate impacts on, and singling out of, specific communities
Consider an AI system that uses mobility data to locate communities that violate stay-at-home orders and directs police there for extra enforcement. Disadvantaged populations lack the economic means to stockpile food and supplies or order deliveries, which forces them to go out. As we mentioned earlier, overrepresentation in front-line sectors means leaving home more often. In addition, individuals and families experiencing homelessness may be flagged as violating stay-at-home orders. The New York City Police Department's enforcement of social-distancing directives has disproportionately targeted Black and Latino residents. That is where responsible AI comes in. Such systems should be designed not to punish these populations with policing, but to help identify root causes and channel additional food and resources to them. This is not a panacea, but it avoids exacerbating existing problems.
Israel has already demonstrated that this model can work. In mid-March it passed an emergency law allowing mobile data to be used to identify infected individuals and those they had contacted. Maccabi Healthcare Services uses AI to identify its most at-risk customers and prioritize testing. This is a fantastic example of adapting a previously responsible and successful AI system: one built and trained to identify the people most at risk from influenza, using millions of records spanning 27 years.
3. Build human-centered AI with privacy by design and human control
Consider an artificial intelligence system that uses mobile phone apps to track infections and trace contacts in an effort to curb new infections. Minorities and economically disadvantaged groups have lower rates of smartphone ownership than other groups. AI systems must account for this to avoid bias. Doing so not only provides adequate protection for vulnerable groups but also improves the overall effectiveness of the system, since these individuals may have high rates of human contact in their workplaces. Ensuring proper tracking and tracing in these populations is critical.
In the US, MIT researchers are developing Private Automated Contact Tracing (PACT), which uses Bluetooth to trace contacts while preserving individual privacy. If you test positive and report it to the app, everyone who has been in close proximity to you over the past 14 days receives a notification. Anonymity and confidentiality are key to the responsible use of AI to curb the spread of COVID-19.
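The general token-exchange idea behind this style of Bluetooth contact tracing can be sketched in a few lines. This is a toy model of the concept only, not PACT's actual protocol (real designs rotate tokens on a schedule and defend against timing and linkage attacks); all names here are invented.

```python
# Phones broadcast random tokens and remember the tokens they hear. A positive
# user uploads only the tokens they *broadcast*; everyone else checks that
# public list against the tokens they overheard locally. No names, phone
# numbers, or locations ever change hands.
import secrets

class Phone:
    def __init__(self):
        self.sent = []    # tokens this phone has broadcast
        self.heard = []   # tokens overheard from nearby phones

    def new_token(self):
        t = secrets.token_hex(16)   # unlinkable random token
        self.sent.append(t)
        return t

def encounter(a, b):
    """Two phones near each other exchange their current tokens."""
    a.heard.append(b.new_token())
    b.heard.append(a.new_token())

alice, bob, carol = Phone(), Phone(), Phone()
encounter(alice, bob)          # Alice and Bob were close; Carol was not
published = set(bob.sent)      # Bob tests positive and uploads his tokens
# Each phone checks locally whether it overheard any published token:
alice_exposed = any(t in published for t in alice.heard)
carol_exposed = any(t in published for t in carol.heard)
print(alice_exposed, carol_exposed)  # True False
```

The privacy property comes from the direction of the upload: the server learns which random tokens a positive user emitted, never who stood near whom.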
In India, the government's Aarogya Setu ("bridge to health") app uses the phone's Bluetooth and location data to inform users whether they were near a person with COVID-19. But, again, confidentiality and anonymity are key to the responsible and ethical use of AI.
This is where the true power of human + AI shines. As these applications are deployed, it is important that they be paired with human-led contact tracing to reach vulnerable populations. AI lets us automate and scale tracking and tracing for most of the population; humans ensure that we help the most vulnerable.
4. Validate systems and base decisions on sanitized, representative data
Consider an AI system that helps doctors quickly decide which patients to treat, and how, in an overloaded healthcare system. One such system, developed in Wuhan, identified biomarkers that correlate with higher survival rates to help doctors determine which patients are likely to need critical care and which can avoid the hospital altogether.
The University of Chicago Medical Center is working to upgrade an existing AI system called eCART. The system will be enhanced for COVID-19 to use more than 100 variables to predict the need for intubation eight hours in advance. Although eight hours may not seem like much, it gives doctors a window to act before a patient's condition worsens.
But sampling problems in these systems and the datasets they rely on can lead to unreliable results or reinforce existing biases. If an AI is trained on observations of mostly white people, as was the case with data from the International Cancer Genome Consortium, how willing would you be to delegate life-and-death healthcare decisions for a non-white patient to it? These issues require careful consideration and demonstrate why it is so important to validate not only the systems themselves but also the data they rely on.
Questions We Should Ask
As companies, researchers, and governments increasingly use AI, a parallel discussion about responsible AI is needed in order to maximize the benefits while minimizing harmful consequences. We need better AI guidelines and assessments for fairness, reliability, bias, and ethics.
There are dozens of dimensions along which we must evaluate each use case to make sure it is designed responsibly. But these four simple questions provide an excellent starting point for a discussion between AI developers and policymakers considering deploying an AI solution to combat COVID-19.
- What are the consequences if the system makes a mistake? Can we redesign the system to minimize this?
- Can we clearly explain how the AI system produced a given result, in a way that is understandable to the general public?
- What are the potential sources of bias — data, people, and design — and how can they be minimized?
- What steps can be taken to protect people’s privacy?
When to use AI solutions and tools
Each of these questions will apply differently to specific use cases. A natural language processing (NLP) system that sifts through tens of thousands of scientific papers in the search for a COVID-19 vaccine poses no direct threat of harm to people and performs the task faster than an army of research assistants ever could. For example, in April the Harvard T.H. Chan School of Public Health and the Human Vaccines Project announced the Human Immunomics Initiative, which will use artificial intelligence models to accelerate vaccine development.
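The core of such a literature-triage system is just relevance ranking at scale. The sketch below uses crude term overlap in place of the modern embedding models a real system would use, and the papers, query, and scores are invented; it is meant only to show the shape of the pipeline, not any specific project's method.

```python
# Toy sketch of NLP triage: score paper abstracts against a query and surface
# the most relevant ones for human review.
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def score(query, abstract):
    """Crude relevance: how often query terms appear in the abstract."""
    q = set(tokens(query))
    counts = Counter(tokens(abstract))
    return sum(counts[t] for t in q)

papers = {
    "p1": "Spike protein epitopes as candidate targets for a SARS-CoV-2 vaccine.",
    "p2": "Supply chain optimization for retail logistics networks.",
    "p3": "Neutralizing antibody responses after SARS-CoV-2 infection.",
}
query = "SARS-CoV-2 vaccine antibody"
ranked = sorted(papers, key=lambda p: score(query, papers[p]), reverse=True)
print(ranked)  # vaccine-related papers rank above the logistics paper
```

Because the system only reorders reading lists for human scientists, a ranking mistake costs minutes, not lives, which is precisely what makes this a low-risk use case under the four questions above.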
This is a global effort, with scientists around the world working together to accelerate drug discovery and defeat COVID-19 with AI. From the aforementioned work in the US all the way to Australia, where Oracle Cloud and the vaccine developer Vaxine, together with Flinders University, are pursuing promising vaccine candidates, we see AI being used for the most ethical of purposes: saving lives.
Another use case addresses a ubiquitous problem of this pandemic: the global spread of misinformation. Imagine trying to manually filter the posts of 1.7 billion daily Facebook users and scan them for disinformation about COVID-19. This is an ideal project for humans + AI, where people confirm the disinformation cases flagged by AI.
This use case is relatively low risk, but its ultimate success depends on human oversight and participation. That is even more true for the high-risk use cases capturing headlines in the wake of the COVID-19 pandemic. Human + AI is not just a safeguard against a system going off the rails; it is essential to AI producing meaningful and effective results, as the previous examples show.
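The human + AI division of labor described above can be sketched as a triage loop: the model auto-resolves only the cases it is confident about and routes the uncertain middle band to human moderators. The keyword "classifier" and thresholds below are stand-ins invented for illustration; a production system would use a trained model and calibrated confidence scores.

```python
# Sketch of human-in-the-loop misinformation review.

def model_score(post):
    """Stand-in for a real classifier: crude keyword-based risk score."""
    risky = ("miracle cure", "5g", "hoax")
    return min(1.0, sum(0.5 for k in risky if k in post.lower()))

def triage(posts, auto_remove=0.9, auto_allow=0.1):
    removed, allowed, human_queue = [], [], []
    for p in posts:
        s = model_score(p)
        if s >= auto_remove:
            removed.append(p)          # model is confident: take down
        elif s <= auto_allow:
            allowed.append(p)          # model is confident: leave up
        else:
            human_queue.append(p)      # uncertain: a person decides
    return removed, allowed, human_queue

posts = [
    "Drink this miracle cure, the virus is a 5g hoax!",
    "Local clinic extends testing hours this weekend.",
    "Is the new 5g tower related to the outbreak?",
]
removed, allowed, queue = triage(posts)
print(len(removed), len(allowed), len(queue))  # 1 1 1
```

Tightening or loosening the two thresholds is exactly the policy lever the four questions are meant to interrogate: how much of the gray zone are we willing to hand to the machine?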
We can divide use cases into three groups to guide these decisions:
- A use case whose decisions should not be delegated to an AI system.
  - Example: using an AI system to determine which patients receive care during a crisis. Here humans must ultimately decide, because of the life-and-death impact of the decisions. The medical community has already recognized this, developing ethical frameworks to support exactly these kinds of decisions.
- A use case that can be deployed responsibly, depending on design and execution.
  - Example: using an AI system to monitor quarantine compliance. Such use cases may be acceptable depending on how the systems are designed and deployed. Using the system to send police into neighborhoods to "crack down" on people who do not comply with quarantine policies would be problematic. But sending police into those areas to understand why quarantine is being violated, so that policymakers can better meet citizens' needs, is legitimate, provided individuals' privacy is protected.
- A use case that is low risk, where the rewards far outweigh the risks.
  - Example: filtering content on social media platforms to ensure that malicious and misleading information about COVID-19 does not spread widely.
We must ask our four questions and deliberately analyze the answers. Then we can decide with confidence which category a project falls into and proceed responsibly and ethically.
A recent example in the USA
We recently created Lighthouse, a new dynamic navigation dashboard that helps organizations get a holistic picture of the ongoing crisis. These "beacons" illuminate many aspects of the situation. For example, we recently collaborated with an American city to develop a tool that predicts disruptions in the food supply chain. One data source was reduced foot traffic in and around distribution centers. Without access to any personally identifiable information (PII), and therefore without compromising personal privacy, the tool shows which parts of the city are likely to face shortages, allowing leaders to respond proactively and prevent an even more serious public health crisis.
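The foot-traffic signal at the heart of that tool can be approximated with a simple baseline-deviation check on aggregate counts, with no individual-level data anywhere in the pipeline. The site names, counts, and 50% threshold below are invented for illustration; this is a sketch of the idea, not the Lighthouse implementation.

```python
# Flag distribution centers whose recent visit counts fall far below their
# own historical baseline, using only aggregate counts (no PII).

def flag_disruptions(history, recent, drop_threshold=0.5):
    """Flag sites whose recent traffic is below `drop_threshold` times
    their historical average."""
    flagged = []
    for site, counts in history.items():
        baseline = sum(counts) / len(counts)
        if recent[site] < baseline * drop_threshold:
            flagged.append(site)
    return flagged

# Aggregated daily visit counts per distribution center (hypothetical):
history = {
    "north_dc": [120, 130, 125, 118],
    "south_dc": [200, 210, 190, 205],
}
recent = {"north_dc": 45, "south_dc": 198}
print(flag_disruptions(history, recent))  # ['north_dc']
```

Because each site is compared only against its own history, the check adapts to centers of very different sizes without any demographic or personal inputs.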
This is an easily replicated process that other organizations can use to create and deploy responsible AI that helps historically disenfranchised people navigate and thrive in the COVID-19 era.
Faced with ethical dilemmas raised by crises such as COVID-19, enterprises and organizations equipped with responsible AI programs will be better positioned to offer solutions that protect the most vulnerable and historically disenfranchised groups while respecting privacy, eliminating historical bias, and maintaining trust. In the rush to "help now," we cannot throw responsible AI out the window. In fact, in the era of COVID-19, it is more important than ever to understand the unintended consequences and long-term effects of the artificial intelligence systems we create.