ChatGPT has driven the White House to generate a proposal on AI

ChatGPT has driven the White House to release a proposal regarding AI and the rights of humans.
https://www.whitehouse.gov/ostp/ai-bill-of-rights/

Below, I list each principle, its brief description, and some thoughts on each.

The "AI Bill of Rights" has five principles:

1️⃣ Safe and Effective Systems: "You should be protected from unsafe or ineffective systems."
🤔 This is pretty self-explanatory: AI systems should be assessed for risk with appropriate remediations, tested prior to deployment, and continually assessed throughout the time the AI is in service. This should be standard practice for all software, not just AI, but if you are aware of how many start-ups operate, you will understand the necessity of repeating what should be common sense.

2️⃣ Algorithmic Discrimination Protections: "You should not face discrimination by algorithms and systems should be used and designed in an equitable way."
🤔 Unfortunately, this isn't a new problem. Algorithmic discrimination has been around for some time, and newer AI systems could make it even worse. An example from 2019: https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/


3️⃣ Data Privacy: "You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used."
🤔 It's a bit of a breath of fresh air to see data privacy coming from a US government entity. Compared to a number of other countries and regions, the EU for example, US data privacy protections are wholly inadequate.
This principle touches on consent, collection, surveillance, deletion, etc.

4️⃣ Notice and Explanation: "You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you."
🤔 I really like this one. Unfortunately, compliance will probably end up being a single line of text buried deep in a Terms of Service. One GPT-based open-source software idea: a plug-in that gives you a concise, plain-English summary of a Terms of Service and its hidden gotchas (a rough sketch follows below). The Electronic Frontier Foundation (EFF) should build this.
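To make the idea concrete, here is a minimal sketch of what the core of such a plug-in might look like, assuming the OpenAI Python SDK (openai>=1.0) with an API key in the environment; the model name, prompts, and file name are illustrative assumptions, not a finished tool.

```python
# Hypothetical sketch of a Terms-of-Service summarizer.
# Assumes: openai>=1.0 installed and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_tos(tos_text: str) -> str:
    """Return a concise, plain-English summary of a Terms of Service,
    flagging clauses a user would probably want to know about."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarize legal Terms of Service in plain English "
                    "and call out surprising or unfavorable clauses."
                ),
            },
            {"role": "user", "content": tos_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Illustrative usage: summarize a locally saved Terms of Service file.
    with open("terms_of_service.txt") as f:
        print(summarize_tos(f.read()))
```

A real browser plug-in would add scraping of the TOS page and some UI, but the summarization step itself is about this small.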

5️⃣ Human Alternatives, Consideration, and Fallback: "You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter."
🤔 Wow, I love this one too! This is already needed, given how hard it is to reach a human at many companies. If I have a poor customer service experience with a company because I can't talk to a human, I stop purchasing from that company. Unfortunately, that list has become long in recent years.
It does say, "where appropriate" so I can foresee many companies getting seriously creative on their definition of "appropriateness."

It will be interesting to see whether this actually acquires teeth; at the moment it is just a suggestion.
