So to your first question, I think you're right that policymakers should really define the guardrails, but I don't think they need to do that for everything. I think we have to choose the areas that are most sensitive. The EU has classified them as high risk. And maybe, out of that, we can get some models that help us think about what's high risk, where we should be spending more time, and, potentially, where policymakers and the rest of us should be spending time together.
I'm a big fan of regulatory sandboxes when it comes to co-design and co-evolution of feedback. I have an article coming out in an Oxford University Press book about an incentive-based grading system that I might talk about in a moment. But I also think that, on the flip side, you all have to consider your reputational risk.
As we move toward a much more digitally advanced society, it is also incumbent upon developers to do their due diligence. You can't afford, as a company, to put out an algorithm or an autonomous system that you think is the best idea and then land on the front page of the newspaper, because what that does is degrade your consumers' trust in your product.
And so what I'm saying, on both sides, is that I think it's worth having a conversation where we put up certain guardrails. Take facial recognition technology, because we don't have the technical accuracy when it's applied to all populations. Or take disparate impacts on financial products and services: there are great models I've found in my work in the banking industry, where they really have triggers because they have regulatory bodies that help them understand which proxies actually produce a disparate impact. And there are areas where we've just seen this play out, like the housing and appraisal market, where artificial intelligence is being used to replace subjective decision-making but is contributing more to the kind of discrimination and predatory assessments that we see. There are certain cases where we need policymakers to impose barriers, but more so we need to be proactive. I tell policymakers all the time that you can't blame data scientists if the data is horrible.
Anthony Green: Right.
Nicol Turner Lee: Put more money into R&D. Help us build better datasets, where minority populations aren't over-represented in some areas and under-represented in others. The most important thing is that we have to work together. I don't think we're going to get a good win-win solution if policymakers are leading this on their own, or if data scientists are leading it themselves in certain areas. I think you really need people working together and collaborating on what those principles are. We create these models. Computers don't. We know what we are doing with these models when we create algorithms or autonomous systems or ad targeting. We know! Sitting in this room, we can't say we don't understand why we use these technologies. We know, because they have a precedent for how they have expanded into our society. But we need some accountability. And that's really what I'm trying to get at: who holds us accountable for these systems we're creating?
It's very interesting, Anthony. These past few weeks, as many of us have seen the conflict in Ukraine, my daughter, because I have a 15-year-old, has come to me with a variety of TikToks and other things she's seen, saying, "Hey mom, did you know this is happening?" And I've had to pull myself back, because I've gotten really involved in the conversation without realizing that once I go down that road with her, I keep getting deeper and deeper into this well.
Anthony Green: Yes.