She previously served as a Research Fellow in Medicine, Artificial Intelligence, and Law at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School for the Project on Precision Medicine, Artificial Intelligence, and the Law (PMAIL).
On top of that, a firm will, in the near future, need to comply with all the specific requirements for "high-risk" AI technology stipulated in the Proposal for a Regulatory Framework for Artificial Intelligence (EU AI Act), and navigate its way through the future European Health Data Space.

Sectoral US Laws

In the U.S., legal protections concerning personal health data may not apply when the entity offering the service is decidedly not a "provider." To illustrate the issue, consider that the Privacy Rule of the Health Insurance Portability and Accountability Act (HIPAA) expressly covers genetic information as a form of health data.
The EO includes biotechnology among its examples of "pressing security risks," and the Secretary of Commerce is charged with enforcing detailed reporting requirements for AI use (with guidance from the National Institute of Standards and Technology) in developing biological outputs that could create security risks.
The EU’s AI Act sets rigorous standards for high-risk AI systems, emphasizing robust data quality, detailed record-keeping, and clear user information to ensure transparency and accountability. These reforms streamline liability claims and establish clearer accountability for AI-generated issues.
Those good intentions notwithstanding, the current health data landscape is dramatically different from when the organizational author of the plan, the Office of the National Coordinator for Health IT, was formed two decades ago.