# 05-ai-news
@Don Alvarez Here’s a quick and dirty summary that includes some of the areas they consider “high risk,” which results in more burdensome legal obligations: https://www.foley.com/en/insights/publications/2023/06/eu-paves-way-us-regulation-ai There are also “unacceptable risk” categories - stuff that aims to manipulate human behavior, CCP-style social scoring databases, stuff relating to facial recognition, emotion inference in law enforcement, etc. I’m going through the reg language over the next couple days (building a legal AI company). If anyone has any specific questions, feel free to let me know here and I’ll do my best to respond once I’ve gone through it. I was a corporate attorney in a prior life. Obligatory caveat: I’m not your attorney and I’m not seeking to represent you, this isn’t legal advice, etc.
Thanks @James Park, setting aside some time to read this carefully
👍 1
Thanks, @James Park, for sharing the summary by Foley. The summary inspired me to write a post on LinkedIn about one of the 8 high-risk use cases. I used to work in econ/litigation consulting, where we analyzed lots of class action cases involving recruiting and on-the-job placement. https://www.linkedin.com/feed/update/urn:li:activity:7076725672282374144/