Cognitive Technologies – an Emergent Force in Healthcare
What was once science fiction is quickly becoming healthcare reality. Artificial intelligence (AI) and machine learning, or “cognitive technologies”, are seeing serious venture investment, spawning countless startups and creating hopes for a brighter future.
In healthcare, developers building AI and machine learning software and devices must protect the privacy and security of regulated health data while harnessing the power of cognitive tech. AI already lets the industry aggregate datasets from disparate healthcare settings, enabling faster predictive decisions with greater confidence. The potential for AI to drive positive change in healthcare is unprecedented.
Cognitive Tech Is Already Seen Across the Healthcare Spectrum
While still emergent, cognitive technologies are being tested and deployed widely. Machine learning and artificial intelligence are already being applied in at least the following ways:
- De-identification of health data
- Precision medicine
- Wellness programs
- Diagnostic assistance
- Image analysis
- Patient engagement
- Robotics and robotic surgery
- Revenue cycle management
- Health insurance modeling and intervention
Cognitive technologies have already helped accelerate many kinds of tasks, leading to faster decision-making and improved outcomes, so their importance cannot be ignored.
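The first item on the list above, de-identification, is a good illustration of how simple the core idea can be. A minimal sketch, assuming a record stored as a Python dictionary and using only a small, illustrative subset of the identifiers HIPAA's Safe Harbor method requires removing (the field names are assumptions, not a complete policy):

```python
# Minimal sketch of rule-based de-identification: drop direct identifiers
# (a small, illustrative subset of HIPAA's Safe Harbor list) before a
# record leaves a protected environment. Field names are assumptions.

DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone", "address", "mrn"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "mrn": "12345",
    "age": 47,
    "diagnosis": "type 2 diabetes",
}
print(deidentify(patient))  # identifiers stripped; clinical fields remain
```

Real de-identification is far more involved (Safe Harbor lists eighteen identifier categories, and free-text fields need their own scrubbing), but the field-allowlist pattern is the starting point most pipelines share.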
HIPAA Requires Health Data Be Protected, Regardless of Technology
As with any new technology, users must consider the benefits and risks of using cognitive tech. HIPAA requires developers to address and manage these risks if they could impact the security or privacy of health data.
As the primary US law protecting the privacy and security of health data, HIPAA is the regulatory framework that all healthcare technology operates under in the USA. Fortunately, HIPAA is technology agnostic, meaning the regulations accommodate any technology, existing and emergent, as long as HIPAA’s compliance requirements are met.
In addition to overall compliance, HIPAA presents three duties for all compliant entities:
- Ensure the confidentiality, integrity, and availability of Protected Health Information (PHI) created, received, maintained, or transmitted.
- Protect against any reasonably anticipated threats and hazards to the security or integrity of PHI.
- Protect against reasonably anticipated uses or disclosures of PHI not permitted by the Privacy Rule.
Cognitive Technology Raises New Privacy and Security Risks for Developers
Among the powers of cognitive tech is its ability to identify health indicators from data not traditionally subject to clinical review, e.g. social media, driving history, exercise patterns or shopping habits. HIPAA, along with the imminent General Data Protection Regulation (GDPR), imposes a standard known as “minimum necessary.” Under this data-minimization imperative, developers should determine what data is required to accomplish specific objectives and business cases, whether consent is required, and which data is discretionary or unnecessary.
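One way to make the data-minimization imperative concrete in code is to declare, per business purpose, exactly which fields may be released, and withhold everything else by default. A minimal sketch (the purpose names and field lists are hypothetical, not prescribed by HIPAA or GDPR):

```python
# Sketch of a "minimum necessary" filter: each business purpose declares
# the fields it needs; all other fields are withheld by default.
# Purpose names and field lists are illustrative assumptions.

ALLOWED_FIELDS = {
    "billing": {"patient_id", "insurer", "procedure_code"},
    "wellness_outreach": {"patient_id", "email_opt_in"},
}

def minimum_necessary(record: dict, purpose: str) -> dict:
    """Release only the fields declared necessary for the given purpose."""
    if purpose not in ALLOWED_FIELDS:
        raise ValueError(f"No data-use policy defined for purpose: {purpose}")
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}
```

The design choice worth noting is the default: an undeclared purpose raises an error rather than passing data through, so new use cases force an explicit policy decision.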
Data residency issues are also implicated under HIPAA and GDPR because of the distributed nature of cognitive technology. “AI-as-a-Service” (AIaaS) products offered today by IBM, Google, Amazon and others let even small firms and startups buy precisely the cognitive power needed, at just the right times, and link it to software and devices via convenient APIs.
The data residency risk here is that such AIaaS offerings are generally “black boxes”, some with resources distributed across multiple data centers in different jurisdictions. For developers, not knowing where users’ PHI is located, or how it flows, is a HIPAA compliance failure.
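One defensive pattern is a residency guard: before any PHI is sent to an external service, check that the destination’s region is on an approved list. A minimal sketch, assuming you maintain a registry mapping endpoints to regions (the URLs, region codes, and registry itself are hypothetical, not real AIaaS endpoints):

```python
# Sketch of a data-residency guard: refuse to send PHI to any endpoint
# whose hosting region is not explicitly approved. The endpoint URLs,
# region codes, and registry are illustrative assumptions.

APPROVED_REGIONS = {"us-east-1", "us-west-2"}  # e.g., US-only for HIPAA data

ENDPOINT_REGIONS = {
    "https://ml.example-aiaas.test/v1/predict": "us-east-1",
    "https://eu.example-aiaas.test/v1/predict": "eu-west-1",
}

def assert_residency(endpoint_url: str) -> None:
    """Raise before PHI leaves approved jurisdictions; unknown endpoints fail too."""
    region = ENDPOINT_REGIONS.get(endpoint_url)
    if region not in APPROVED_REGIONS:
        raise RuntimeError(
            f"Refusing to send PHI to {endpoint_url} (region: {region})"
        )
```

As with the minimum-necessary filter, the useful property is the default: an endpoint with no known region is treated as non-compliant rather than waved through.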
How Can Developers Address These New Risks?
Since the risks associated with cognitive tech are only beginning to be understood, developers should take at least three steps to begin addressing them:
- Incorporate Cognitive Technology into Risk Assessments – HIPAA requires an “accurate and thorough” assessment of risks to PHI by entities using or processing it. If your healthcare software or device will employ any sort of cognitive tech, include that in your risk analysis. There may be many unknowns, and quantifying unfamiliar risk is a challenge, but it must be addressed and evaluated. Even anonymized health data can be hacked and potentially re-identified, posing a degree of risk that must be considered.
- Monitor Regulatory and Legal Developments – The legal and regulatory landscape around cognitive tech is changing, and developers should find efficient ways to monitor new developments. Whether via newsletters, conferences, blogs, or from other sources, staying on top of this shifting regulatory and legal landscape is a must for affected developers.
- Conduct Analyses in HIPAA-Protected Application Environments – Avoid the risks of relying on anonymization by running analytics in protected, compliant environments where access to identifiable data is tightly controlled. The results, once verified, can then be released publicly.
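The third step above can be sketched as a release gate: analytics run inside the protected environment, and only summary statistics computed over a sufficiently large cohort are allowed out. A minimal sketch (the cohort threshold is an illustrative assumption, not a regulatory requirement):

```python
# Sketch of an "aggregate results only" release gate: row-level data never
# leaves the protected environment; only summary statistics over a minimum
# cohort size are released. The threshold value is an assumption.

MIN_COHORT = 10  # refuse to release statistics over very small groups

def release_summary(values: list) -> dict:
    """Return aggregate statistics, or refuse if the cohort is too small."""
    if len(values) < MIN_COHORT:
        raise ValueError(
            "Cohort too small to release without re-identification risk"
        )
    return {"n": len(values), "mean": sum(values) / len(values)}
```

Small-cohort suppression is only one piece of disclosure control, but it captures the principle: identifiable inputs stay inside the compliant boundary, and only vetted aggregates cross it.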
Though they’re still in their infancy, cognitive resources like machine learning and artificial intelligence are already turning science fiction into healthcare fact. Developers building cognitive tech into software, services or devices must remain vigilant about new risks to health data, while meeting their regulatory and compliance obligations.
Was this article helpful? Subscribe below to learn more about MedStack and get tips delivered straight to your inbox.