Elevating Global Awareness

Will Griffin, Chief Ethics Officer at HyperGiant Discusses Ethics in AI in Apogeo Spatial Executive Interview

Myrna James Yoo
Publisher, Apogeo Spatial

I had the distinct pleasure of interviewing Will Griffin just a few weeks prior to his winning the 2020 IEEE Award for Distinguished Ethical Practices. He’s the Chief Ethics Officer of Hypergiant Industries, a Texas company offering a suite of AI services. Will and I discussed his Top of Mind Framework that analyzes how a client is using AI. Listen in!

Will Griffin
Chief Ethics Officer at HyperGiant


Myrna James: I’m really excited to be here today with Will Griffin of HyperGiant, which is one of the new, very cutting-edge AI companies focused on developing world-changing technology in the areas of space, defense, and critical infrastructure. There’s a lot of overlap there with Apogeo Spatial because, as you know, we are about geospatial data, and now with data fusion and analytics, these are the technologies that are enabling us to solve the world’s biggest problems. So I’m really thrilled to be speaking with Will, specifically about ethics in AI. But first, let’s hear an introduction.

Will Griffin: Will Griffin, Chief Ethics Officer of HyperGiant Industries, and I’m super excited to be here. Thank you for having me.

Myrna James: Absolutely. Give me an idea about your path, about how you came to this position. And I’m fascinated because ethics in AI is a fairly new field. So I’d just love to hear a little bit about your trajectory.


“So the goal for ethics in tech and our program is to bake ethics into the development and design process so we can forecast the impact on our constituency, which we believe is all of humanity.”


Will Griffin: Well, I’ve always been interested in what I would consider to be cutting edge industries, and my background has primarily been in media. I went to a science and math high school here in Austin, Texas. And then from there I went to Dartmouth College. It’s a place where the early internet was created, a bunch of programming languages were created. And we were one of the first campuses to actually be wired with an intranet that connected to the internet. And so, that was one of the things that was attractive to me about it, that it was a cutting edge college.

And then from there, I went to Wall Street at Goldman Sachs, where I worked in a pretty innovative group that handled asset-backed securities, including mortgage-backed securities, which ultimately blew up the globe in 2008. So when I was there in the early 90s, that’s when we began to create all those financial instruments like collateralized mortgage obligations, mortgage-backed securities, asset-backed securities… Obviously I left long before it blew up, but at the time it was an exciting place to be on Wall Street.

From there I went to Harvard for law school, and also did a year of courses in the entrepreneurship curriculum, and then went into McKinsey & Company, where I was consulting primarily with large media companies like Disney, Time Warner, and Turner. Especially at that time there was a lot of industry consolidation, so most of the work we were doing was post-merger integration, but we did some fun stuff; for Disney, we helped launch the new theme park in California, which was a lot of fun.

And then I went to work at News Corporation, which at the time was an exciting company because we were continuing to aggregate and grow. At the time we owned DirecTV, almost all of the satellite delivery companies, at least in the English-speaking world, Fox News, and a lot of interactive stuff. Since then I’ve been in small companies. I find it to be more exciting, and I feel like your work can have a lot of impact on the company itself. And then if you’re in a growing industry, your company can have an impact on the industry.

So I’ve been at HyperGiant for two years. I was attracted to the vision of our founder and CEO, Ben Lamm. He has a mantra, which is delivering the future we were promised: using technologies in the ways they were envisioned. That’s the flying cars, human beings being able to go to space, using technology in a clean way that can help solve the climate crisis and other challenges faced by humanity. In order for us to accomplish that mission, Ben suggested to me, and I actually agree, that ethics will be the defining line of whether technology ultimately has a beneficial impact for humanity, or becomes a stumbling block that ultimately pits human beings against each other. And so, the goal is to be on the right side of history, operationalize ethics into our workflows, and do it in a way that becomes an example for the rest of the industry.

Myrna James: That is so, so, so important. I love your really broad and deep experience. Most people have settled with either one or the other, right? I remember when Time Warner was merging with AOL, I worked at Time Warner right around then – prior to then actually. 

Will Griffin: Oh wow! So you worked for Time?

Myrna James: Yeah, I worked in Chicago back in the 1990s, for what was Time Warner at the time. And then they were merging with AOL and it was the beginning of the internet and I literally did have this intuition that this is not going to go well. It took a few years to play out, but ultimately I was right. So it’s funny, my background is more traditional publishing, but now, after 18 years of working in the geospatial industry by owning this magazine, I have a very broad view of how data, especially geospatial data, is being used.

So it’s all coming together. I just want to make a point for my listeners and the people who read Apogeo Spatial regularly, and who now listen as we do more podcasts. All the different types of data out there are being merged and fused together. And these days, whatever project you’re working on is going to have geospatial data as an underlying layer, almost no matter what. It’s just like GPS being embedded into everything we do as well. We don’t even know it’s embedded, but it is, and then we get benefits from that on our smartphones. So that is fantastic, and it’s true, ethics will be the defining line of the future for success, but also for ultimately having these projects benefit humanity. I think if you drill down, is that what we’re talking about – causing harm to humanity or not?

Will Griffin: Well, that’s what it comes down to. Our framework basically has three core elements to it, and it’s based on Immanuel Kant’s deontological perspective. The first step is goodwill: is there a positive intent for the use case? There, the burden of proof is on the use case owner, in this case the designer and developer of the particular AI, to show that there is a positive intent for the technology they intend to develop and deploy.

Once you clear that hurdle, you go to the second phase, which is the categorical imperative. That is a maxim which asks: if every company in your industry, and every industry in the world, used technology in the way that you contemplate, what impact would it have on the world? What this does is require the use case owner to think of its constituency and stakeholders as not just the client or the first user of the technology; you have to think of the second-, third-, and fourth-order effects on humanity and the rest of society when the technology is deployed.

The third step is the law of humanity, which many people know as: are people being used merely as a means to an end, or is the intent of the technology to benefit people? So that’s where we start. You have to answer those three questions in an affirmative way before we’ll move forward with the project. If you can’t answer all three affirmatively, then either the project is modified, which happens most of the time, or the project itself is shelved. If companies ask these questions, it creates a situation where all of humanity becomes a stakeholder in the technology, and then you will feel like you owe obligations to all of humanity, and that will impact the way you design, develop, and deploy the technology.
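For readers who think in code, a minimal sketch of the three-screen gate Griffin describes might look like the following. The class, field names, and the tom_review function are illustrative assumptions for this article, not Hypergiant’s actual tooling.

```python
# A minimal sketch (not Hypergiant's actual process) of the Top of Mind (TOM)
# gate described above: a use case moves forward only if all three questions
# are answered affirmatively; otherwise it is modified or shelved.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    has_positive_intent: bool    # 1. Goodwill: is there a positive intent?
    universalizable: bool        # 2. Categorical imperative: acceptable if everyone did this?
    treats_people_as_ends: bool  # 3. Law of humanity: do people benefit, rather than being merely used?

def tom_review(use_case: UseCase) -> str:
    """Return the next action for a proposed AI use case."""
    checks = (
        use_case.has_positive_intent,
        use_case.universalizable,
        use_case.treats_people_as_ends,
    )
    if all(checks):
        return "proceed"
    # Per the interview, projects that fail a check are usually modified rather than shelved.
    return "modify or shelve"

print(tom_review(UseCase("demo", True, True, False)))  # -> "modify or shelve"
```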

Myrna James: So you have those three screens, and you walk your potential clients through that process. I’m sure most of them do have to tweak the project or make changes to get through it. That’s really great, because you’re helping them think through the long-term implications before you even start. And that’s so important, because what I’m most afraid of is that some of the AI out there, with different applications, is being unleashed without that guidance…

Will Griffin: And we saw it happen. I’ve been in a couple of different areas where we saw people or industry actors try to get off the moral hook by saying that there were unintended consequences. Obviously the Silicon Valley mantra is move fast, break things, but Silicon Valley didn’t invent that; that was also Wall Street’s mantra…

Myrna James:  …where you have a lot of experience.

Will Griffin: Yes. So move fast, make money. Innovation is always ahead of the regulators’ and policy makers’ ability to keep up. So when you’re in the innovation world, a lot of times people have been able to get off the moral hook by saying, “Oh, these are unintended consequences.” Two examples from my background. One: when I worked in the mortgage-backed securities group at Goldman Sachs, when we were creating derivatives, collateralized mortgage obligations, and asset-backed securities, no one forecast that the urge to make money, the greed factor, would ultimately lead to a situation where credit was made too easy for borrowers who weren’t necessarily creditworthy, and that these packages would be sold off to mom-and-pop investors and institutional investors in a secondary market once those loans were securitized.

Ultimately, it led to a lot of non-performing loans, people who couldn’t pay the loans, and crazy situations where people didn’t understand the mortgages they were taking, so ultimately the entire mortgage market collapsed in 2008. You could have forecast it if you had gone through what I call TOM, or “top of mind ethics,” which is the framework that I just laid out. If that was top of mind, then we could have perceived: well, if we make credit easy, people will take that easy credit, even if they aren’t able to pay those loans. If that happens, and those products are available all over the world, what happens? Too many people take loans that they ultimately can’t afford. And if we knew we were essentially selling those bad loans off to mom-and-pop investors, it was foreseeable that there would eventually be a crash. But there was no ethical vetting done at the time those products were unleashed.

Myrna James: You’re saying that could have been avoided if these processes you’re talking about now, the “Top of Mind” ethics process, had been in place – if those things had been considered, that would have been avoided.

Will Griffin: Absolutely.

Myrna James: The big short in 2008 would have been avoided, the housing crisis, the housing bubble would’ve been avoided.

Will Griffin: Absolutely. 2008 doesn’t happen if, in 1992, ’93, and ’94, this level of vetting was done. So that’s the goal. The goal is that the same imagination that creates innovative products is the same imagination used to foresee the impacts on our stakeholders, which we say are all of humanity.

Myrna James: I’d love for your comment on what just happened with GameStop stock, “Main Street” versus Wall Street that just happened. Please just share your wisdom.

Will Griffin: I think with GameStop, let’s take a look at the big picture. Ultimately, what happens is that Wall Street, hedge funds, and institutional investors are essentially playing a game, and that game is basically: use the information that they have about the market and about the individual companies to create wins for themselves, which is the opportunity to make money. Now that information is becoming more widely distributed; individual investors are also seeing how that game works.

Myrna James: And especially after being exposed in 2008, people understand how shorting works now.

Will Griffin: And being exposed to video games and to social media, where the higher your follower count is, the more successful you are. They aggregated themselves in a way that they could also play the game, consolidating their individual accounts into a form of market power. And so, in the process, the actual companies themselves, and the employees, and the traditional investors in those companies have basically become pawns in a game between the hedge funds and the individual investors. I personally don’t think that there is a morally superior actor in that, because there’s nothing about the actions on either side that is linked to actual, true, underlying fundamental value.

Myrna James: That’s a really good point. Really, just technology, internet, Reddit self-organizing has allowed individuals to, at least, as you said, play the game, because they joined forces and then they were able to influence the market. So it’s interesting, isn’t it? I was really fascinated watching all that happen.

Will Griffin: The underlying companies and their employees are going to be the ultimate losers in that, because inevitably the actual true value of GameStop will be revealed. And when that happens, everyone who doesn’t get out of the game in time will be burned. The reason I say that is because there’s a huge information asymmetry. Hedge funds have these large Bloomberg terminals and access to all the data. The individual investors on Reddit have aggregated a certain segment of the data that they need to execute the strategy that they want. But there are other people who are hearing about it on social media and hopping in who don’t have access to that data. They’re not able to follow along, so they’re just left riding the wave, and they won’t know when the game’s over.

Myrna James: Dangerous. Maybe one little shining light is that GameStop’s value will at least be higher than it would have been, because the visibility means people in general will go out and shop there more, right?

Will Griffin: Right.

Myrna James: It’s certainly given them a lot of visibility, and I have a 15-year-old son and we go to GameStop.

Will Griffin: But the fundamentals are against that business in the same way that the fundamentals were against Blockbuster.

Myrna James: Of course – because brick-and-mortar will ultimately not survive in the digital economy. Thank you for that little side note there. 

So I have a couple of questions specifically around AI ethics. I know of course, trust is a big issue. People need to trust the initial data that goes into the machine learning and the AI and people need to trust that there’s a guiding light like the TOM that you mentioned – the top of mind ethics. Can you talk a little bit about trust?


“We try to evangelize to the developers and the designers, the burden of proof is not on the fact checkers, it’s not on the regulators, it’s not on the general public. The burden of proof is on the developers and the designers to be creative and imagine all of these impacts on the rest of society and to create a technology in a way that minimizes those impacts and maximizes the benefits.”


Will Griffin: Well, the whole objective is to engender trust. A mantra that we use around here is that ethics equals trust, and it’s trust that leads to economic value creation. That’s been true since the early trading relationships in the Mesopotamian Valley; the early trading system was all based on trust. There were no credit scores. You didn’t have a credit history. It was all basically your reputation. And when we moved from a hunter-gatherer society to an agricultural, more sedentary society, all of human survival was based on the trust that we would trade what our labor created for something that we need, and that we would produce something that you need. All of civilization was built on trust…

Myrna James: … and the agreement of value, right? The agreement of value as well.

Will Griffin: Then the agreement and exchange of value, completely, yes. And we trust that you will deliver what it is that you say you will deliver, and we will deliver what it is that we say we will deliver. Without that – those early trading relationships and trust – civilization would have been impossible, because no single group of human beings is able to produce all of the things you need to live and survive. So trading was a requirement.

Myrna James: And we also need transparency, right? It brings up the need for transparency of information, data, background… And that also leads me to blockchain. Where we’re going in the future with everything is toward new technology on the blockchain, where all the information is visible to everyone. If changes are made to a transaction, it’s visible, right?

Will Griffin: Transparency is hugely important, but it still boils down to trust, because it becomes impossible for an individual human or individual communities to verify. So going back to what you described, your core focus area of geospatial data: not everybody knows how GPS works, right?

Myrna James: Right.

Will Griffin: You’re on mountain time, I’m on central time, and we agree that we’re going to talk at 11:15 my time, 10:15 your time. We don’t know exactly how GPS works to set that time; we’ve just made an agreement, right? So in your world, there is now a competitive system to GPS. China has created their own version of GPS, because it’s a geopolitical power struggle and they don’t like the rest of the world essentially being on our time. So they’ve created a competitive GPS, and they’re trying to get countries to subscribe to their time. It’s like the metric system versus our customary system. All of our phones are linked to it, our cable systems too; pretty much all of civilization, at least in the Western world, has agreed on GPS. But there are very few people who can honestly articulate how it works. I’m sure you can, because you’re an expert in space, but most people can’t.

Myrna James: And there’s also Galileo, the GNSS system created by the European Union.

Will Griffin: And there are very few people who are going to be able to verify it. So one of the things, at least in our world of AI and blockchain (which I would consider to be part of those emerging technologies), is that there’s a whole movement towards transparency, but the real question is: transparency to whom? Because the majority of society is not going to be able to verify, even if you put all of the code out there.

So there’s always still this element of trust, which is why we try to evangelize to the developers and the designers, the burden of proof is not on the fact checkers, it’s not on the regulators, it’s not on the general public. The burden of proof is on the developers and the designers to be creative and imagine all of these impacts on the rest of society and to create a technology in a way that minimizes those impacts and maximizes the benefits.

Myrna James: That’s a really good point, that even with transparency, you still have to have the underlying trust.

Will Griffin: Right. I was at a company called eUniverse; subsequently, when we went public, it was Intermix Media, and within it we had a company called MySpace. There was a company called Friendster, which came and went pretty quickly, but MySpace was the first social media company to really catch fire. On the back end of that, we were able to aggregate all of this personal data. Keep in mind, this is when people put up personal profiles and described everything about themselves, and people began to expose themselves in ways that we hadn’t seen on the internet, in very personal ways.

Well, anyway, for all of that data there were implied consent forms and privacy forms, but there was no way that the kids or young adults on that platform could understand what rights they were giving away. And to be honest, there was no way that the early MySpace, Facebook, and Friendster technologists could understand even the value of the data they were getting. It wasn’t a data mining operation; it was an eyeball aggregation operation.

Myrna James: It literally was just advertising at the time.

Will Griffin: It was eyeballs, and traffic, and page views; that was the whole game until advertisers began to ask for the actual data and targeting became more important. And when that happened, consumers had no way of knowing how their data was going to be used. They couldn’t imagine it. So the burden was really on the developers and designers, and what we’re trying to do now, in this current environment with Facebook and Twitter, is retrofit regulations onto an innovation that has become so massive.

And one of the main triggers and reasons why we’re doing this is because now we realize adversarial actors have figured it out and they’re exploiting the system, and greed and the profit motive have come into the picture. And so, the companies themselves have become bad actors. And now we’re trying to retrofit regulations on it. 

Myrna James: Exactly.

Will Griffin: So the goal for ethics in tech and our program is to bake ethics into the development and design process so we can forecast the impact on our constituency, which we believe is all of humanity.

Myrna James: That makes sense. One of the quotes from a Facebook exec in The Social Dilemma, the Netflix documentary about this, is fairly astonishing: they say, “Oh, well, we didn’t intend for this to happen. We didn’t intend for our AI to run amok, basically.” But whether you intend it or not, it’s happening, so there has to be some accountability there.

Will Griffin: Yes. And the accountability has to go two ways. We’ve seen with the mortgage crisis how difficult it is to retrofit ethics onto innovation, and we’ve seen with social media how difficult it is to retrofit ethics onto a situation to create accountability. Which is why the burden now, on all innovations going forward, is that ethics needs to be baked into the way you design and develop, and you have to use your creativity and innovation to forecast all the things that could go wrong.

So the question that needs to be answered, to your point about accountability, is: what happens when you don’t use your imagination to forecast the things that could go wrong and protect society against them? What are the penalties? In the old days, in the early trading and agricultural societies, if you sold me something or traded something with me that had no value, I stopped trading with you, and that in and of itself represented an existential threat to you, because if you don’t get food, or energy, or other supplies or services, you can’t survive. In today’s world, wealth has been created and hoarded in such a way that even if one constituency leaves you, you still have other constituents, so your survival is not at stake.

So if I have a privacy violation in Europe and I violate GDPR, if you’re Facebook, that’s a $5 billion fine. Well, Facebook has a couple of hundred billion dollars on its balance sheet. That five billion is just a cost of doing business. By breaching the trust with society, the penalty is not big enough; it doesn’t represent enough of an existential threat for you to change your behavior. So I forecast that over the next couple of years, the penalties will be ratcheted up in such a way that breaching the trust and causing harm to society will ultimately lead to companies being penalized out of business. Otherwise, there’s really no deterrent to the behavior.

Myrna James: Exactly. I just have a couple more questions for you. The next thing I wanted to ask about is another documentary, Coded Bias, which is really about the unethical facial recognition biases that exist. I don’t know if you’ve seen that one. It’s an example of biases being baked in, which is inappropriate, right?

Will Griffin: Well, that’s actually a great work. Joy Buolamwini at the Algorithmic Justice League is one of the lead people on that. And it was brought to light in two ways. I feel like that’s one of the best examples of evangelism, because Coded Bias was out, but Coded Bias was also supported by the research that they did at MIT. So they were able to use the nomenclature and speak in a language that engineers could understand. Then, while we’re in a pandemic, George Floyd happens, and it becomes clear to everyone in society at large how facial recognition was being used to track down protestors. Essentially, that’s how the issue was highlighted: law enforcement was using facial recognition to track down protestors.

Once that happened, in a zeitgeist moment when all of the world was paying attention, the scrutiny turned back to the use of facial recognition, and the MIT work and the things that were highlighted in Coded Bias made clear to everyone that this is not an isolated incident: this inaccurate and biased technology is being used all over the country, and it’s negatively impacting lives. The blowback was so substantial in that moment that, as you know, IBM pulled out of the facial recognition business altogether, Amazon and Microsoft announced a one-year moratorium on selling facial recognition to law enforcement, and a lot of municipalities around the country pulled their facial recognition programs, including the city of Detroit, a majority-Black city, whose facial recognition had a 94% error rate.

Myrna James: Oh my gosh!

Will Griffin: So that’s an example of how ethicists who are well-versed in the engineering side of it and well-versed in the technology can communicate a problem and an issue. And once it becomes commonly known, then they can act in a way that causes the designers and developers to act, which is what happened.

To me, that’s one of the most powerful case studies of why it’s always worth it to point out unethical behavior on the part of companies. Because if you get a large enough constituency, it can cause them to act. It can make the penalties large enough that they have no choice but to either pull or modify the technology. We’re going to see more of that. There are cases all over the country… Right now I’m team-teaching a course at Penn State Law School on ethics and AI, and one of the cases that we’re going to talk about is the Ofqual case in the UK.

So basically, because of the pandemic, students weren’t able to take their end-of-year standardized tests. In the UK, those standardized tests are even more important than they are here because they determine placement in college and university. Instead, they created an algorithm that would guess what your score would have been, based on your individual grades and on the scores of the students who had come from your school in past years.

Myrna James: Oh my Gosh.

Will Griffin: Obviously, there were so many flaws in the design of this program, which was run by the department of education in the UK. When the scores came back, they were way off and out of line for individual students. But the worst part was, if you were from a majority-minority school, and say you were a killer student at that majority-minority school, because of the past test scores at your school, your scores were projected lower than they would have been if you had sat for the test. Enough people were affected that there were actually riots in the streets of the UK, which ultimately led the department of education to change that policy and get rid of those scores. So the students then had the option of keeping the algorithm’s score if they liked it, or getting a teacher to create a score for them. That’s just an example of the kinds of ways that algorithms are going to get people to take to the streets.
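To make the flaw Griffin describes concrete, here is a deliberately simplified sketch, not the actual Ofqual model, whose details were more involved. The function name and the school_weight parameter are illustrative assumptions: when a school’s historical results carry most of the weight, a strong student at a historically low-scoring school is pulled down regardless of their own record.

```python
# A simplified illustration (not the real Ofqual algorithm) of how leaning on a
# school's historical distribution penalizes strong students at historically
# low-scoring schools.
from statistics import mean

def predicted_score(student_grades: list[float],
                    school_history: list[float],
                    school_weight: float = 0.7) -> float:
    """Blend the individual's average with the school's historical average.

    `school_weight` is a made-up parameter for illustration; the higher it is,
    the more the school's past results dominate the individual's own grades.
    """
    individual = mean(student_grades)
    historical = mean(school_history)
    return school_weight * historical + (1 - school_weight) * individual

# A top student (average 92) at a school whose past cohorts averaged 60:
print(predicted_score([90, 94, 92], [58, 60, 62]))  # ~69.6, far below their own record
```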

Myrna James: Wow, that’s incredible. I did another interview a few weeks ago with Matthew Bailey; his new book is about ethics in AI, and his company is AIEthics.World, and I just met with Maria McAndrews as well. It’s just fascinating to realize how deep it goes. I have to reiterate, and I know I spoke with you about this earlier, that this is just so important. I think this is one of the most important issues in tech right now, because we can’t just allow these machines, these computers, to run amok without having some kind of stopgap for them and some sort of fence around them. Is there anything else that you’d like to share that we have not covered?

Will Griffin: We’ve covered a lot of ground here. And I thank you for spotlighting the work and thank you for the work that you are doing. I think it’s the defining issue of our time. 

There is a group, AI Global, along with the World Economic Forum, that is doing what I consider to be very important work. They have a Responsible AI Certification project underway, where companies, designers, and developers will be able to take their AI plans and projects to them as an independent third-party validator, to have them vetted and get a rating. It’s like Consumer Reports, or the Good Housekeeping Seal of Approval, or J.D. Power ratings: a third-party entity will be able to rate your project, your processes, your data, and even ultimately your company, based on the criteria that we talked about, fairness, transparency, accountability, along those types of standards.

Myrna James: Eventually, there will be pressure for companies to get that certification to prove that they’re actually doing AI in an ethical way, is that right?

Will Griffin: Without question. And then it becomes a roadmap for regulators. It’s like the FDA requiring nutrition labels on food: you can’t go into the market and sell food, at least in the United States, without a nutrition label. And if you don’t have one, then you have to disclose that these claims have not been approved by the FDA, so the consumer knows that you weren’t willing to allow your work, or products, or services to be vetted.

Myrna James: So eventually that’s going to be happening. That’s really good to know. And there’s a tie there to the World Economic Forum; AI Global and the WEF are working on that together.

Will Griffin: Yeah.

Myrna James: So good to know. Thank you so much Will. I really appreciate your time, I appreciate your wisdom, and enthusiasm. This has just really been amazing. Thank you so much for joining me.

Will Griffin: Thank you, Myrna. I appreciate it, be well.