@Don Alvarez Here’s a quick and dirty summary that includes some of the areas they consider “high risk,” which triggers more burdensome legal obligations:
https://www.foley.com/en/insights/publications/2023/06/eu-paves-way-us-regulation-ai There are also “unacceptable risk” categories - things like systems designed to manipulate human behavior, CCP-style social scoring databases, certain uses of facial recognition, inferring emotions in law enforcement contexts, etc.
I’m going through the regulation’s language over the next couple of days (I’m building a legal AI company). If anyone has specific questions, feel free to drop them here and I’ll do my best to respond once I’ve gone through it. I was a corporate attorney in a prior life. Obligatory caveat: I’m not your attorney, I’m not seeking to represent you, this isn’t legal advice, etc.