- July 28, 2019
- Posted by: Amio Ikihele
- Category: Blog
I came across an article from the US over the weekend which suggested that Fitbits and other wearables track heart rates inaccurately in people of colour.
This problem isn’t isolated to wearable technologies, though. A few months ago, I saw the following video, in which a person of colour was unable to use a sensor-triggered soap dispenser. The dispenser didn’t recognise the darker skin tone, yet easily dispensed soap to an individual with a lighter skin tone, and even to a white paper towel placed underneath. Another article found gender and racial bias (again) in Amazon’s facial recognition technology. These programmes have obvious ‘limitations’ which can easily be addressed, but why are these defects present in the first place?
In 2017, I attended a Digital Medicine (DIGIMED17) conference at Scripps Research Translational Institute in La Jolla, San Diego.
Scientists at the Scripps Research Translational Institute aim to make individualized medicine a reality for everyone. They use the tools of genomics and digital medicine combined with cutting-edge informatics techniques to better understand each individual and ultimately render more effective healthcare.
One presentation from a Fitbit representative shared a few facts about the device:
- Surprisingly, Fitbit holds the largest health and fitness database (Big Data).
- The potential applications of these devices are endless, and they will no doubt become the norm in the future.
- Fitbits and other wearables have revolutionised the way heart rate, sleep and physical activity are monitored.
- Fitbit is considered the wearable of choice for clinical research and has been used in over 440 research studies with credible universities, accounting for 89% of published work, 83% of clinical trials and 95% of NIH-funded research.
Finding new ways to monitor and improve health behaviours is important. My concern, with this much data and now knowing that there are technological biases favouring fairer-skinned individuals, is how we ensure that health outcomes, or the ‘data’, are accurately reflected when consumers are reporting that the device has limitations for people with darker skin tones. Where health insurance companies monitor individuals’ health using Fitbit data (as some do as part of their health plans in the USA), do premiums increase, or are financial incentives withheld, as a result of this defect? I’m not familiar with how the US health insurance system works, but my fear is that technology designed to monitor health will be used against individuals not because of their lifestyle, but purely because of a defect in a technology that has not considered people of colour. This could easily affect Māori, Pacific and other communities with darker skin tones in New Zealand.
What does this mean for digital health designers in the future?
Any new technology that is designed will have its weaknesses. One way to overcome this is for digital health designers to be inclusive of individuals from different demographics and backgrounds and, as seen here, different skin tones. In fact, this approach should be the norm for any health programme being designed, whether it involves technology or not. Designers need to ensure diverse communities are involved at the very beginning of the design process, NOT at the end when the product or device is ready to be tested or piloted. This ensures digital health technologies are tailored for specific groups earlier, not later.
Emerging technologies can support the way people manage their own health, particularly the management of long-term conditions such as high blood pressure and diabetes, which continue to increase globally at an alarming rate. Our challenge is to ensure ‘new innovations’ do not exacerbate health inequities or further disadvantage communities who already experience barriers to accessing health services. To do this, we must continue to view ‘innovative solutions’ through an equity lens.
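One way to make that equity lens concrete is in how a device is evaluated before it ships. As a minimal sketch (the measurements, skin-tone groupings and numbers below are entirely made up for illustration, not real Fitbit data), designers could compare a heart-rate sensor’s error across skin-tone groups rather than reporting a single overall accuracy figure:

```python
# Sketch: evaluate device accuracy per skin-tone group, not just overall.
# All data below is hypothetical, for illustration only
# (groups loosely follow Fitzpatrick skin types I-VI).

measurements = [
    # (skin_tone_group, device_bpm, reference_ecg_bpm)
    ("I-II", 72, 70), ("I-II", 88, 87), ("I-II", 65, 66),
    ("III-IV", 75, 72), ("III-IV", 90, 86), ("III-IV", 68, 64),
    ("V-VI", 80, 70), ("V-VI", 95, 83), ("V-VI", 70, 59),
]

def mean_abs_error_by_group(samples):
    """Mean absolute error (device vs. reference) for each group."""
    totals = {}
    for group, device, reference in samples:
        err_sum, count = totals.get(group, (0, 0))
        totals[group] = (err_sum + abs(device - reference), count + 1)
    return {group: err_sum / count for group, (err_sum, count) in totals.items()}

errors = mean_abs_error_by_group(measurements)
for group, mae in errors.items():
    print(f"{group}: mean absolute error {mae:.1f} bpm")

# A single overall accuracy figure would average away the disparity;
# reporting per-group error makes this kind of bias visible early.
```

A test like this belongs at the very start of product development, alongside the diverse recruitment of trial participants described above, so that the disparity is caught before the device reaches consumers.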