The Real Winners in AI Policymaking Are Lawyers

Governance to protect consumers online has only become more important as the role the internet plays in our lives has dramatically increased. Long gone are the days when you bought everything you needed from names and faces you knew. Back then, your gut played a pretty big role in where you shopped. Today, not so much.

You think you’re buying a couch from a reputable brand, only to find out later that you purchased it from a web domain with a zero instead of a capital “O” (and you never got your couch). When it’s well done, a phishing scam is nearly impossible to tell apart from the real thing. AI-generated voices impersonating elected officials are being used in robocalls to sway opinions on hot topics, prompting swift calls from the executive branch for a ban on AI voice impersonations. In today’s world, most of us do not know the names and faces of the people we buy from online, and we may never. Granted, I have gotten to know our UPS drivers Marlon and Pablo and our amazing USPS mail carrier Mandy pretty well over the years.

With all the unknowns and scams online, it’s only natural that our lawmakers are concerned and rushing to squash the most harmful consumer threats in this new AI-enabled world. This is actually the third big wave of consumer privacy policymaking since the dawn of digitized and commercialized data, and you’d think we’d have learned a few things.

Modern-day consumer privacy and protection started with the mass aggregation and commercial use of data, and one of the earliest laws was the Fair Credit Reporting Act of 1970. A few years later, in 1974, came the Privacy Act, which governs the use of personal information by the government, and FERPA, the Family Educational Rights and Privacy Act, which gave parents the right to access and manage their dependents’ educational records.

One of the first “modern” privacy acts is the VPPA, or the Video Privacy Protection Act of 1988, which protects video rental history (no joke). Then came HIPAA, the Health Insurance Portability and Accountability Act of 1996, effective 2003, which set national standards for protecting health records. In 1998, COPPA, the Children’s Online Privacy Protection Act, was passed, effective 2000 and amended in 2013. On top of all that, 15 states today have their own consumer data privacy laws with varying degrees of governance.

As more businesses have gone online, cross-border sales have also skyrocketed, so companies now need to think about privacy regulations in other parts of the world too. As of 2018, companies that do business in the European Union (as many online businesses do) must also comply with the specificities of GDPR, which required certain technical and backend changes, had implications for data storage and processing, and forced marketing teams to shift their practices around data management and opt-in consent.

All of these protections are good for the consumer, but I have to say that after 20 years of dealing with revision after revision of policies, documentation, and practices to remain in compliance with the ever-growing spiderweb of the consumer privacy regulatory landscape, it’s really the lawyers who are winning in the end. With every new law, with every revision, the lawyers have to be called in to audit for compliance.

So now with AI, something very similar is happening, but at a much grander scale. No fewer than 30 US states (plus the District of Columbia) have already signed AI governance policies into law, are in the process of voting on them, or are exploring their own individual state policies. And the range of governance oversight is broad: job loss, healthcare, privacy, data mining, housing, transparency, and the list goes on. Is there a better way of doing this, one that effectively protects consumers but doesn’t create decades of backlog work for lawyers and an ongoing standing carve-out for legal fees in company OpEx budgets?

Perhaps. On March 13, 2024, the European Union passed the AI Act, and it’s set to become law sometime this May or June. Touted as the first comprehensive framework for AI governance (it was first introduced in 2021), it is described by the European Commission this way: “the AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).”

We are still in the early days of understanding both what AI can do for humanity and what needs to be regulated to protect people from harm. But we do have a good sense of which kinds of human behaviors are considered safe, risky, and dangerous. An AI framework is important enough to be enacted federally, to govern and protect everyone equally. Everyone will be affected. Let’s not piecemeal this one with each hot incident; the lawyers have enough work.
