Trust By Design: Dissecting IBM’s Enterprise AI Governance Stack



IBM recently hosted an AI Governance Analyst Briefing, and the ideas presented during the call were exceptional. As an analyst, I find it tough to even write a word like “exceptional”, for I take seriously the vocabulary I use in a research note I author. Each word must carry weight and convey the reality on the ground in the truest sense – even if it means rubbing a few people the wrong way. So, I use “exceptional” with a great sense of responsibility.

The insights shared by IBM executives during the call validated what many in the world of technology already know: artificial intelligence governance is now a real commercial need rather than a theoretical one. Companies without robust AI governance face major operational and reputational consequences as legal frameworks like the EU AI Act, and global regulators more broadly, redefine corporate AI compliance and tighten supervision.

A long-time advocate of ethical and transparent artificial intelligence, IBM presented its governance strategy covering its risk framework, tools for regulatory compliance, the watsonx.governance AI governance platform, and its principles for ethical and open artificial intelligence. What struck me was that AI governance is about embedding risk controls into the AI lifecycle rather than following compliance checklists.

Driven by this, I dug further into IBM’s AI governance approach and produced this research note to investigate what IBM is doing, how it fits evolving legislation, and what lessons companies may take from its approach. This is not merely another industry update but a must-read for technology decision-makers. Ignoring this topic is probably the worst mistake a technology decision-maker can make in their career – hence the suggestion to read more on it, starting with this research note.

The risks of bias, security vulnerabilities, and regulatory non-compliance are growing as companies rush to embed artificial intelligence in their business models. AI may turn from a competitive advantage into a liability without a disciplined governance system. Understanding these difficulties, IBM has developed one of the most thorough AI governance systems in the business.

Underlying IBM’s governance strategy is a multi-layered approach combining technology enforcement tools, risk management, organisational accountability, and ethical standards. This guarantees that artificial intelligence systems comply with changing laws and are open, fair, and understandable. Unlike conventional governance methods based on post-deployment compliance checks, IBM embeds governance throughout the AI lifecycle—from creation to deployment and monitoring.

Across our global advisory work with clients in banking, logistics, and telecom, we’ve seen this very model of embedded governance becoming the baseline ask. Sharing an example from Greyhound Fieldnote (insights from our ongoing advisory work with end-user clients): a global logistics firm refused to move an AI model into production until risk and fairness controls were validated through CI/CD-integrated checkpoints—a real-world illustration of IBM’s principles in action. This shift isn’t anecdotal.

Per Greyhound CIO Pulse 2025, 68% of enterprise technology leaders now classify AI governance platforms as “critical-to-operations,” up 27 points since 2023. This signals a broad move away from checklist-based compliance toward systemic, code-level enforcement.

We at Greyhound Research believe IBM’s governance model doesn’t rely on training decks or policy binders—it’s woven into the fabric of how AI gets built. This is a hard-won lesson for enterprises navigating scale, speed, and scrutiny. By baking in traceability, fairness, and explainability from the start, IBM moves beyond philosophical commitments to operational certainty. And in a world where trust is measurable—and increasingly contractual—that separation is everything.

This section details IBM’s AI governance framework, examining its operational procedures, ethical-by-design approach, governance structure, and values.

Responsible AI Principles and Pillars

IBM grounds its AI governance in a set of core principles. The company’s Principles for Trust and Transparency state:

  1. AI’s purpose is to augment human intelligence,
  2. Data and insights belong to their creator, and
  3. Technology must be transparent and explainable.

These high-level principles are operationalised through five Pillars of Trustworthy AI: Explainability, Fairness, Robustness, Transparency, and Privacy. For example, IBM emphasises that AI systems should be fair (mitigating bias), explainable to users, robust against attacks or failures, transparent via disclosures (e.g. AI FactSheets), and protective of user privacy.

To support these pillars, IBM has developed open-source toolkits like AI Fairness 360 and AI Explainability 360 for bias detection and interpretability, Adversarial Robustness 360 for security, and an AI Privacy toolkit. This integration of tools and principles ensures fairness, explainability, and other ethical considerations are built into IBM’s AI development process.
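
To make this concrete, here is a minimal sketch of a fairness check using the open-source AI Fairness 360 package. The toy data, column names, and group coding are our own illustrative assumptions, not an IBM example; the 0.8 disparate-impact threshold is the common “four-fifths” rule of thumb rather than a product default.

```python
# A minimal fairness check with IBM's open-source AI Fairness 360 toolkit
# (pip install aif360 pandas). Data, column names, and group coding are toy
# assumptions for illustration only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# 'approved' is the model decision; 'sex' is the protected attribute
# (1 = privileged group in this illustrative coding).
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [50, 60, 40, 80, 45, 55, 35, 70],
    "approved": [1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# A disparate impact below ~0.8 is a common red flag ("four-fifths" rule).
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```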

Across our enterprise conversations—especially in regulated industries like healthcare and financial services—these principles have started influencing not just model outputs but procurement decisions and platform evaluations.

Sharing an example from Greyhound Fieldnote, a European healthcare provider working with our advisory team flagged a mislabelled training set that would have skewed patient eligibility predictions. The issue was caught using IBM’s Explainability 360 toolkit and resolved before deployment—avoiding what could have been a costly and reputationally damaging incident.

Per Greyhound Risk & Ethics Pulse 2025, 73% of enterprise compliance leaders now prioritise explainability and fairness above performance when evaluating third-party AI solutions, signalling a shift in what truly counts as trustworthy AI.

We at Greyhound Research believe IBM’s Responsible AI principles are not framed as abstract virtues—they’re rendered into usable, operational tools that developers and governance leads can actually work with. By offering open-source kits and building explainability into every layer of model development, IBM turns ethics into action. In today’s regulatory environment, that translation from value to verification is what separates software from liability.

Ethics by Design and Risk Management

IBM integrates these principles into a formal Ethics by Design methodology that guides its teams through the AI lifecycle. This internal framework embeds ethical risk management at each technology development and deployment stage. It provides IBM developers and data scientists with concrete guidance (e.g. checklists and best practices) to identify and mitigate risks like bias or lack of transparency early in the design of AI models.

Notably, IBM aligned its internal processes with the U.S. NIST AI Risk Management Framework (AI RMF) as soon as it was released. IBM contributed to NIST’s multi-stakeholder development of the AI RMF and, after its January 2023 publication, conducted a three-phase internal review to map NIST’s core functions (Govern, Map, Measure, Manage) to IBM’s governance practices. The analysis found IBM’s controls and ethics-by-design practices were well-aligned with NIST’s standards for AI risk management, confirming that IBM’s methodology covers all key risk areas through the AI lifecycle. This proactive alignment shows how IBM builds formal risk management and compliance into its AI governance methodology.
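
To illustrate what such a mapping can look like in day-to-day use, the sketch below encodes the four NIST AI RMF functions as a pre-deployment review checklist. The control names and the mapping itself are hypothetical; IBM’s actual internal mapping is not public.

```python
# A hypothetical, simplified mapping of the NIST AI RMF core functions
# (Govern, Map, Measure, Manage) to lifecycle controls, used here as a
# pre-deployment checklist. Control names are illustrative only.
NIST_RMF_CONTROLS = {
    "Govern":  ["ethics_board_signoff", "risk_appetite_defined"],
    "Map":     ["use_case_registered", "stakeholders_identified"],
    "Measure": ["bias_metrics_logged", "drift_monitoring_enabled"],
    "Manage":  ["mitigation_plan_documented", "incident_escalation_path"],
}

def review_gaps(completed: set[str]) -> dict[str, list[str]]:
    """Return, per RMF function, the controls a model has not yet satisfied."""
    gaps = {}
    for function, controls in NIST_RMF_CONTROLS.items():
        missing = [c for c in controls if c not in completed]
        if missing:
            gaps[function] = missing
    return gaps

# Example: everything done except drift monitoring and the escalation path.
done = {"ethics_board_signoff", "risk_appetite_defined", "use_case_registered",
        "stakeholders_identified", "bias_metrics_logged",
        "mitigation_plan_documented"}
print(review_gaps(done))
# -> {'Measure': ['drift_monitoring_enabled'], 'Manage': ['incident_escalation_path']}
```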

This early alignment with NIST has set IBM apart in client conversations where regulatory clarity is paramount. Sharing an example from Greyhound Fieldnote, in a multi-region engagement with a Fortune 500 financial services client, we saw firsthand how IBM’s NIST-based risk mapping helped resolve a months-long standoff between model developers and the legal team, turning a policy bottleneck into a validated checklist built into the model review process. According to the Greyhound Regulatory Preparedness Pulse 2025, only 26% of enterprise leaders feel confident that their AI risk frameworks map cleanly to global standards such as NIST, ISO, or the EU AI Act—making IBM’s alignment a key differentiator in regulated markets.

We at Greyhound Research believe IBM’s approach to ethics isn’t just internal policy—it’s an exportable model that translates regulatory abstraction into day-to-day product and engineering workflows. By grounding its methodology in NIST and operationalising risk across the lifecycle, IBM allows enterprise teams to move beyond “principles on paper” and into measurable, defensible action. In the current climate of AI scrutiny, that kind of executional clarity has become table stakes.

Governance Structure and Accountability

IBM has instituted a comprehensive internal governance structure to oversee AI ethics and compliance. The IBM AI Ethics Board sits at the centre of this framework and provides centralised review and decision-making on AI use cases, policies, and practices. The Board includes diverse leaders from across IBM and is co-chaired by IBM’s Global AI Ethics Leader (Dr. Francesca Rossi) and Chief Privacy & Trust Officer (Christina Montgomery). Its mission is to ensure IBM’s AI development and deployments align with the company’s values, to advance trustworthy AI for clients, and to hold the business accountable to ethical commitments.

Four key roles make up IBM’s AI governance framework:

  1. Policy Advisory Committee – a group of senior executives who provide top-level oversight of the AI Ethics Board and set IBM’s strategy and risk tolerance for AI. This committee ensures AI governance is backed by leadership and integrated with IBM’s overall business risk management.
  2. AI Ethics Board – the central, cross-disciplinary body described above, which reviews sensitive AI use cases and makes decisions on ethical issues, product ethics reviews, and communications. The Board also issues guidance (e.g. a recent Point of View on foundation models addressing generative AI risks) and updates IBM’s policies as needed.
  3. AI Ethics Focal Points – appointed representatives in each IBM business unit who are trained in AI ethics and serve as first-line liaisons. They evaluate AI projects in their unit for potential ethical concerns, help teams mitigate risks (like biases in a specific AI solution), and escalate issues to the central Board if necessary. This ensures local oversight and early detection of ethics issues in day-to-day projects.
  4. Advocacy Network – a grassroots network of employees across various roles who champion IBM’s ethics principles within their teams and share best practices. This helps cultivate a company-wide culture of responsible innovation, as employees at all levels internalise and promote AI ethics norms.

A Chief Privacy Officer (CPO)-led AI Ethics Project Office supports all these roles as a coordinating hub. This team helps implement Board decisions, coordinates training and tools, sets meeting agendas, and keeps the Board informed of industry trends and regulatory changes. Through this multi-tiered governance model, IBM embeds accountability at every level: executives define risk appetite, the Ethics Board provides oversight, and focal points and advocates operationalise ethics in daily AI development.

The result is a “multidisciplinary, multidimensional” governance approach that IBM publicly espouses, grounded in its trust principles and continuously updated to meet evolving ethical and legal expectations.

This central governance model is becoming a reference point in our conversations with global clients navigating decentralised AI development across business units. Sharing an example from Greyhound Fieldnote, during an advisory engagement with a multinational consumer goods firm, we observed how the absence of embedded ethics leads often caused AI projects to stall at the compliance review stage.

IBM’s use of AI Ethics Focal Points as embedded champions helped this client reimagine their own approach—moving from central gatekeeping to distributed, accountable design.

According to the Greyhound CIO Governance Pulse 2025, 61% of technology leaders now say that successful AI governance depends more on day-to-day enforcement roles than executive committees—a sign that structural decentralisation is becoming the norm.

We at Greyhound Research believe IBM’s governance structure strikes the right balance between oversight and execution. While many vendors talk about ethics, few can show how those values cascade into operating roles, policies, and product decisions across business units. IBM’s ability to institutionalise trust—through dedicated roles, embedded training, and cross-functional decision flows—positions it as a pragmatic leader in enterprise AI governance. And as decentralised AI development continues to scale, this model will only become more relevant.

Operational Processes

In practice, IBM’s governance framework translates into defined AI product development and deployment processes. Certain high-risk AI use cases cannot proceed without Ethics Board review and approval. All AI projects are expected to follow IBM’s Ethics by Design guidelines (e.g. conducting bias assessments and documenting decisions), with the AI Ethics Focal Points ensuring compliance in their units.

IBM has also developed AI FactSheets – essentially model documentation standards – to capture transparency information (data sources, model intent, performance metrics, bias testing results, etc.) for each AI model. These FactSheets travel with models from inception to deployment, providing an auditable record of how the model was built and its ethical risk profile.
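
As a rough illustration, a FactSheet can be thought of as a structured record that travels with the model. The sketch below models one in Python; the fields are a plausible subset inferred from IBM’s public descriptions of FactSheets, not the product’s actual schema.

```python
# A hypothetical FactSheet record. Field names are inferred from IBM's
# public descriptions of AI FactSheets, not the actual product schema.
from dataclasses import dataclass, field

@dataclass
class FactSheet:
    model_name: str
    intended_purpose: str
    data_sources: list[str]
    risk_level: str                                   # e.g. "low" / "medium" / "high"
    performance_metrics: dict[str, float] = field(default_factory=dict)
    bias_test_results: dict[str, float] = field(default_factory=dict)
    version: int = 1                                  # bumped on each model update

fs = FactSheet(
    model_name="loan-approval-v3",
    intended_purpose="Score retail loan applications",
    data_sources=["core_banking.applications_2019_2024"],
    risk_level="high",
    performance_metrics={"auc": 0.87},
    bias_test_results={"disparate_impact": 0.91},
)
```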

IBM’s internal Integrated Governance Program links data governance with AI model governance, improving the traceability of data lineage and model updates across the organisation.

Together, these measures instantiate IBM’s principles into repeatable practices. The AI Ethics Board, for instance, can require teams to implement additional mitigations if a use case review finds misalignment with IBM’s fairness or transparency standards. IBM also mandates regular training for its employees on AI ethics. Over the past five years, IBM has built an “AI ethics culture” through employee education, advocacy, and making ethics a performance consideration, reinforcing accountability beyond formal structure.

This level of operational linkage has become a recurring theme in our discussions with enterprises attempting to unify data and AI governance across siloed business functions. Sharing an example from Greyhound Fieldnote, in a recent transformation engagement with a global telecommunications company, IBM’s governance framework helped the client replace their fragmented model inventory with a centralised, FactSheet-driven repository. This not only improved traceability but enabled faster compliance reporting and more confident sign-offs from non-technical stakeholders. According to the Greyhound AI Ops Pulse 2025, 74% of AI leaders now view model auditability and traceability as “non-negotiable” procurement criteria—up from 56% in 2022.

At Greyhound Research, we believe IBM’s AI governance methodology starts from clear ethical principles and uses a combination of organisational structures (Ethics Board, committees, focal points) and technical tools/processes (Ethics by Design playbooks, bias toolkits, FactSheets) to enforce those principles. This ensures considerations like fairness, explainability, and risk management are systematically integrated into AI development.

In our view, what truly stands out is the efforts made to streamline the detection and management of AI ethics concerns via a formal governance framework. IBM acknowledges that broad principles are “insufficient” unless backed by concrete review processes and accountability mechanisms. By combining top-down oversight with bottom-up engagement and tooling, IBM strives to make responsible AI a default practice across its business.

We at Greyhound Research believe IBM’s approach to operationalising governance reflects a deeper maturity in AI productisation. This isn’t just about ethical intent—it’s about executional repeatability. By building FactSheets, embedded reviews, and lineage mapping directly into product workflows, IBM makes trust scalable. For enterprises navigating regulatory complexity and internal silos, that kind of embedded operational hygiene is no longer optional—it’s foundational.

IBM’s flagship AI governance product is IBM watsonx.governance, introduced in late 2023 as part of the watsonx AI/ML platform. It is an integrated toolkit designed to direct, manage, and monitor AI across the entire lifecycle. The toolkit connects with an organisation’s existing AI infrastructure to automate governance workflows, saving time and cost while helping the organisation comply with regulations.

A core advantage of watsonx.governance is its flexibility – it can govern generative AI and traditional ML models from any vendor or platform, including third-party services like OpenAI, Amazon SageMaker, Google Vertex, etc. This cross-platform capability is crucial because enterprises often use a mix of AI services. Watsonx.governance provides a single control plane, ensuring consistent oversight.

This cross-platform governance layer is now one of the most sought-after traits in enterprise tooling. Sharing an example from Greyhound Fieldnote, in our ongoing advisory work with a Fortune 100 insurance firm, IBM’s ability to manage both homegrown and third-party models in a single workflow dashboard helped streamline their model inventory and reduce regulatory prep time by over 30%.

According to the Greyhound ModelOps Pulse 2025, 67% of enterprise architecture leaders now cite multi-model, multi-cloud support as the most critical success factor in their AI governance stack—well above automation or UI simplicity.

The watsonx.governance suite includes the following technical capabilities:

Model Lifecycle Governance

It automates end-to-end governance from model inception to retirement. Users can register an AI use case, track its development progress, and enforce checkpoints (e.g. ethical review or approval gates) before moving to production. Integrated workflows allow teams to document the intended purpose of a model, its risk level, and whether it uses sensitive data or foundation models. Approval processes (with role-based sign-offs) are built-in, and every step is logged for audit.

This ensures that no model goes live without proper vetting. The system also maintains an inventory of all models, with classification by risk, department, and lifecycle phase, giving a governance dashboard view of AI across the enterprise.
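
A minimal sketch of such a checkpointed promotion flow is shown below. The risk tiers, role names, and gating logic are illustrative assumptions; watsonx.governance’s actual interfaces differ.

```python
# Hypothetical lifecycle checkpoint: a model cannot be promoted to production
# until every role required for its risk tier has signed off, and every
# decision is appended to an audit log. Illustrative only; this does not use
# watsonx.governance's real API.
REQUIRED_SIGNOFFS = {
    "low":    ["model_owner"],
    "medium": ["model_owner", "risk_officer"],
    "high":   ["model_owner", "risk_officer", "ethics_board"],
}
audit_log: list[tuple[str, str, list[str]]] = []

def promote(use_case: str, risk_level: str, signoffs: dict[str, bool]) -> None:
    missing = [r for r in REQUIRED_SIGNOFFS[risk_level] if not signoffs.get(r)]
    if missing:
        audit_log.append((use_case, "blocked", missing))
        raise PermissionError(f"{use_case}: missing sign-offs {missing}")
    audit_log.append((use_case, "promoted", []))

try:
    promote("resume-summarisation", "high",
            {"model_owner": True, "risk_officer": True, "ethics_board": False})
except PermissionError as err:
    print(err)   # resume-summarisation: missing sign-offs ['ethics_board']
```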

Risk Management and Monitoring

Watsonx.governance provides automated monitoring of models for various risk metrics. It continuously evaluates model health, accuracy, drift, and bias and (for generative AI) checks output quality (e.g. toxicity or hallucination rates). Users can set risk thresholds (e.g. a maximum allowable drift or minimum accuracy) and get alerts or trigger actions when thresholds are breached.

The toolkit integrates with IBM Guardium (a data security solution) to detect security vulnerabilities or policy violations in AI pipelines. For instance, it can discover “shadow AI” – models running without proper approval – by scanning environments for unregistered AI artifacts. Based on these metrics, risk dashboards and scorecards summarise each model’s risk status (green/yellow/red). This allows risk officers to prioritise interventions on models that pose a higher threat.
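
The sketch below shows the shape of this threshold-based scorecard logic. The metric names, values, and thresholds are invented for demonstration; in the real product, OpenScale computes these metrics and the console renders the scorecards.

```python
# Illustrative threshold evaluation of the kind a risk scorecard performs.
# Metric names, values, and thresholds are invented for demonstration.
THRESHOLDS = {
    "accuracy":         {"min": 0.85},
    "drift_magnitude":  {"max": 0.10},
    "disparate_impact": {"min": 0.80},   # four-fifths rule of thumb
    "toxicity_rate":    {"max": 0.01},   # generative-AI output check
}

def risk_status(metrics: dict[str, float]) -> str:
    """Map live metrics onto a simple green/yellow/red scorecard status."""
    breaches = []
    for name, value in metrics.items():
        rule = THRESHOLDS.get(name, {})
        if ("min" in rule and value < rule["min"]) or \
           ("max" in rule and value > rule["max"]):
            breaches.append(name)
    if not breaches:
        return "green"
    return "red" if len(breaches) > 1 else f"yellow (breach: {breaches[0]})"

print(risk_status({"accuracy": 0.88, "drift_magnitude": 0.14,
                   "disparate_impact": 0.92, "toxicity_rate": 0.004}))
# -> yellow (breach: drift_magnitude)
```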

Compliance Management

A standout feature is the focus on regulatory compliance. Watsonx.governance helps translate regulations into enforceable policies. It comes with policy templates and rule sets that correspond to frameworks like the EU AI Act and NIST. For example, if the EU AI Act requires documentation of training data for high-risk models, watsonx.governance can prompt users to fill in that information during model registration and prevent promotion to production until it is provided.

The tool can track upcoming regulatory changes and ensure policies remain up-to-date. It also automates the creation of compliance artifacts – e.g. generating a report or FactSheet for a model that includes all required documentation to satisfy an audit. This greatly reduces the manual burden of compliance. Companies can demonstrate that their AI hiring tool has undergone a bias assessment and has a transparency statement simply by exporting the FactSheet that Watsonx.governance compiled throughout the model’s lifecycle.

AI FactSheets and Transparency

At a technical level, watsonx.governance leverages IBM’s AI FactSheets technology to capture metadata on models automatically. When data scientists train a model, details like training data used, algorithms, hyperparameters, performance metrics, and bias test results are logged into a FactSheet document. This FactSheet travels with the model through deployment. It serves both as an internal transparency mechanism and an external documentation artifact.

The tool can enforce the completeness of FactSheets (e.g. requiring that certain fields are filled before deployment) and can version them as models are updated, providing an audit trail of changes. FactSheets are crucial for accountability, and IBM has made them an integral part of its governance tech stack, in line with its view that “transparency reinforces trust” in AI.
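
A short sketch of how completeness enforcement might work, reusing the hypothetical FactSheet idea from earlier; the required-field lists per risk tier are illustrative assumptions, not product behaviour.

```python
# Illustrative completeness gate: deployment is blocked until the FactSheet
# fields required for the model's risk tier are present and non-empty.
# The per-tier field lists are assumptions, not product behaviour.
REQUIRED_FIELDS = {
    "high":   ["intended_purpose", "data_sources", "bias_test_results"],
    "medium": ["intended_purpose", "data_sources"],
    "low":    ["intended_purpose"],
}

def missing_fields(factsheet: dict, risk_level: str) -> list[str]:
    """Return required fields that are absent or empty for this risk tier."""
    return [f for f in REQUIRED_FIELDS[risk_level] if not factsheet.get(f)]

fs = {
    "intended_purpose": "Score retail loan applications",
    "data_sources": ["core_banking.applications_2019_2024"],
    "bias_test_results": {},   # bias testing not yet recorded
}
print(missing_fields(fs, "high"))   # -> ['bias_test_results']: block deployment
```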

Integration and Ecosystem

Under the hood, watsonx.governance is not a single monolithic product but a bundle of integrated IBM technologies. It combines IBM OpenPages (which provides the governance, risk, and compliance console and workflow engine), IBM Watson OpenScale (which provides the runtime model monitoring for bias, drift, accuracy, and explainability), and IBM AI Factsheets (from IBM Research, for model documentation).

These components are pre-integrated so that clients get a unified experience. For example, OpenPages (in the watsonx Governance Console) presents a dashboard where a compliance officer can see outputs from OpenScale’s monitors on each model and the FactSheet data – without needing to jump between tools.

The open architecture also means it can ingest data from non-IBM model ops tools via APIs. IBM has also ensured watsonx.governance works in hybrid cloud environments: it can be deployed on IBM Cloud, other clouds, or on-premises, and manage AI assets across all those environments. This is important for enterprises with regulatory constraints on where data/models can reside.

Below is a screenshot of the IBM watsonx.governance console, illustrating an AI use case record with risk level, status, and required data for approval. The interface integrates model details, documentation, and workflow actions to enforce governance.

IBM Watsonx Governance Console Resume Summarization Use-Case | Source: IBM

The example above shows how an AI use case (here, Resume summarisation) is captured in Watsonx.governance. The Risk Level is set to High, indicating special oversight, and the system has fields for Purpose, Stakeholder Departments, use of foundation models, etc. On the right, it shows a checklist (“Use Case Data Gathering”), prompting the user to provide key information before submitting for approval. This interface ensures no high-risk AI project flies under the radar – everything is recorded and requires explicit validation steps.

Bias Detection and Explainability

IBM’s governance tools include advanced capabilities for fairness and explainability. Watson OpenScale (part of watsonx.governance) can detect direct and indirect bias in models by monitoring model outcomes across protected attributes in real time. It can even apply automated bias mitigation techniques (e.g., reweighting outputs) in certain cases where it finds bias.
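
As an indication of what reweighting-style mitigation involves, here is a pass using the open-source AI Fairness 360 toolkit; OpenScale’s in-product mitigation is a separate, product-specific implementation, and the toy data mirrors the earlier fairness sketch.

```python
# Bias mitigation by reweighting training instances with AI Fairness 360
# (pip install aif360 pandas). This open-source technique is indicative;
# OpenScale's built-in mitigation is a separate, product-specific feature.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],   # toy data, as in the earlier sketch
    "income":   [50, 60, 40, 80, 45, 55, 35, 70],
    "approved": [1, 1, 0, 1, 0, 1, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["sex"],
                             favorable_label=1, unfavorable_label=0)

rw = Reweighing(unprivileged_groups=[{"sex": 0}],
                privileged_groups=[{"sex": 1}])
balanced = rw.fit_transform(dataset)   # same rows, adjusted instance weights

# Feed these as sample_weight when retraining to counteract the imbalance.
print(balanced.instance_weights)
```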

This functionality has become central in industries where fairness is not just ethical—it’s regulatory. Sharing an example from Greyhound Fieldnote, during an engagement with a European retail banking client, IBM’s bias detection tools flagged inconsistencies in mortgage approval patterns across demographic groups. What began as a suspected data imbalance revealed deeper issues in model weighting and training logic, prompting a retraining cycle with stricter thresholds.

According to the Greyhound Model Risk Pulse 2025, 66% of risk and compliance leaders now say that real-time bias detection is a critical capability for AI models deployed in customer-facing decisions—up from just 42% two years ago.

For explainability, OpenScale generates reason codes or feature importance for individual predictions, which can be logged to the FactSheet or provided to end-users for transparency. IBM also offers the AI Explainability 360 toolkit for deeper model interpretability techniques (like SHAP values and counterfactual explanations), which can be integrated into watsonx pipelines.
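
For illustration, per-prediction feature attributions of this kind can be produced with the open-source shap package (AI Explainability 360 ships its own implementations). The model and data below are toy assumptions.

```python
# Per-prediction feature attributions with the open-source shap package
# (pip install shap scikit-learn), of the kind that can be logged to a
# FactSheet. The model and data are toy assumptions.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])   # explain a single prediction

# Per-feature contributions answering "why did the model do that?" for
# this one decision; positive values push towards the predicted class.
print(shap_values)
```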

All these ensure that when AI makes a decision, the governance platform can answer “why did the model do that?” and “is this decision fair?” – crucial questions for accountable AI.

We at Greyhound Research believe IBM’s approach to bias detection and explainability goes beyond academic toolkits—it operationalises accountability in production environments. In sectors like banking, insurance, and healthcare, where decisions must be both defensible and explainable, IBM’s ability to generate human-readable reasoning for AI predictions is a competitive differentiator. As regulatory pressure around fairness intensifies, tools that can detect, surface, and mitigate bias in real-time will become essential—not optional.

Lifecycle Management and ModelOps

From a technical standpoint, IBM’s governance tools dovetail with MLOps (machine learning operations). Watsonx.governance doesn’t just monitor models in production; it also helps orchestrate the retraining or change management process. For instance, it keeps track of model drift – if drift exceeds a set threshold, it can flag that the model should be retrained and even trigger a retraining pipeline (if connected to Watson Studio or another ML pipeline tool). It manages model versioning and ensures that all approvals and documentation are updated when models are replaced or updated.

This level of version control and documentation automation is becoming a central expectation in enterprise MLOps deployments. Sharing an example from Greyhound Fieldnote, in our advisory work with a global manufacturing company, we observed how IBM’s lifecycle tooling helped the client flag model drift early and trigger a retraining pipeline, preventing performance degradation on their predictive maintenance systems.

According to the Greyhound ModelOps Pulse 2025, 62% of enterprise data science leaders now say that automatic model version tracking and retraining triggers are the top capabilities they look for when assessing MLOps tooling.

Integration with DevOps processes means deploying an AI model can be tied to a governance check (e.g., a CI/CD pipeline won’t push a model to production unless Watsonx.governance confirms compliance status green). This integration is vital for scaling AI in large enterprises, preventing “rogue” models from being deployed outside the governed process.
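
A minimal sketch of such a deployment gate is below, assuming a hypothetical governance endpoint that reports a model’s compliance status; the URL, response shape, and status values are our assumptions, not the real watsonx.governance API.

```python
# Hypothetical CI/CD gate: the pipeline step exits non-zero (failing the
# build) unless the governance platform reports "green" for the model.
# The endpoint URL, response shape, and status values are assumptions.
import json
import sys
import urllib.request

GOVERNANCE_API = "https://governance.example.com/api/models/{model_id}/status"

def compliance_gate(model_id: str) -> None:
    with urllib.request.urlopen(GOVERNANCE_API.format(model_id=model_id)) as resp:
        status = json.load(resp)["status"]   # e.g. "green" / "yellow" / "red"
    if status != "green":
        print(f"Deployment blocked: model {model_id} status is {status}")
        sys.exit(1)   # non-zero exit fails the CI/CD job
    print(f"Model {model_id} cleared governance checks")

if __name__ == "__main__":
    compliance_gate(sys.argv[1])   # e.g. invoked from a pipeline stage
```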

We at Greyhound Research believe IBM’s model lifecycle architecture offers a level of traceability and orchestration that many enterprises are still building toward. Its integration of approvals, audit logs, and retraining triggers into the MLOps pipeline closes a critical governance gap. In an environment where model sprawl and shadow AI are emerging as top concerns, IBM’s approach to lifecycle management is as much about risk containment as it is about operational agility.

Integration with Enterprise Systems

IBM’s governance tech is designed to plug into existing enterprise IT and data ecosystems. For example, OpenPages can ingest HR or financial systems data if an AI use case needs business context. Watsonx.governance can connect to data catalogues (to automatically fetch metadata about training data) and identity management systems (to enforce role-based access and approvals).

IBM emphasises that the toolkit supports hybrid cloud deployments and integration so that whether an organisation’s AI workloads run on IBM Cloud, AWS, Azure, or on their servers, the governance layer can cover all. This is supported by the product’s ability to deploy “wherever it makes sense across your hybrid cloud”.

This hybrid-first design is increasingly what enterprise buyers expect when evaluating governance tooling. Sharing an example from Greyhound Fieldnote, in a recent engagement with a multinational energy provider, IBM’s ability to plug into both on-premise data infrastructure and AWS-hosted model services helped avoid a costly refactor during a compliance-driven migration. According to the Greyhound CIO Infrastructure Pulse 2025, 71% of enterprise IT leaders now prioritise AI governance solutions that integrate natively with multi-cloud and on-prem environments, citing data residency, access control, and compliance fragmentation as top concerns.

One concrete integration is with IBM Cloud Pak for Data – watsonx.governance can run as part of CP4D, leveraging its data integration and model deployment capabilities. Also, IBM has partnerships (e.g. with Deloitte and Tech Mahindra) to integrate watsonx.governance into broader enterprise solutions, like Deloitte’s AI governance services and Tech Mahindra’s amplifAI suite, extending the reach into client-specific workflows.

We at Greyhound Research believe IBM’s strength lies not just in the power of its tools but in its interoperability across complex enterprise estates. Governance can’t be effective if it only works in isolated pockets. IBM’s commitment to hybrid-cloud integration and ecosystem partnerships ensures its tooling remains enterprise-grade—flexible enough to fit and robust enough to scale.

Greyhound Standpoint

In summary, IBM’s AI governance tools (especially Watsonx.governance) provide a comprehensive technology stack for Responsible AI. They cover model tracking, bias/explainability, risk scoring, compliance automation, and workflow – all configurable to an enterprise’s needs. This gives businesses a practical way to implement the governance principles IBM espouses.

We at Greyhound Research believe IBM’s governance stack—particularly watsonx.governance—demonstrates how operational oversight can be both rigorous and adaptable. Rather than forcing enterprises into toolchain overhauls or vendor lock-in, IBM provides a governance control plane that works across ecosystems, model types, and deployment modes. In a world where AI use is exploding but accountability remains fractured, this open yet deeply integrated architecture is a meaningful advantage.

For instance, if a company adopts IBM’s toolkit, it essentially gets an “AI control centre” where, at any given time, it can see how many AI models it has, what each model’s risk profile is, and whether each has met the necessary ethical and regulatory checks. Without such tools, these tasks would be manually intensive (surveys, spreadsheets) and prone to gaps.

IBM’s technology thus operationalises governance at scale. We at Greyhound Research believe IBM’s integration of governance into the technical pipeline of AI development is a differentiator, ensuring that AI governance is not an external bureaucratic process but is embedded in the tools data scientists and engineers use daily.

As compliance becomes more stringent, businesses are under intense pressure to ensure their AI systems are ethical, transparent, and compliant. Governments are fast developing and implementing rules that specify how artificial intelligence may be applied, stressing risk management, explainability, and human oversight. From the EU AI Act to the NIST AI Risk Management Framework in the United States, the regulatory terrain is changing faster than ever.

By aligning with these rules and actively guiding their evolution, IBM has positioned itself as a proactive leader in artificial intelligence governance. It has also long advocated for precision regulation—a risk-based strategy that regulates artificial intelligence according to its potential for harm rather than applying all-encompassing guidelines to every AI use.

By partnering with EU legislators, American government agencies, and international bodies like the OECD and G7, IBM is helping to set the direction of artificial intelligence policy. Its watsonx.governance system also supports transparency, tracks artificial intelligence risks, and automates compliance, helping businesses navigate this changing regulatory terrain.

In this section, we examine how IBM aligns its governance structure with key regulations across Europe, the United States, APAC, and worldwide AI policy initiatives, ensuring both IBM and its customers remain ahead of compliance concerns in an increasingly regulated AI environment.

EU AI Act and European Alignment

IBM has been an outspoken supporter of the EU’s risk-based approach to AI regulation and has proactively adjusted its governance to meet emerging EU requirements. The EU AI Act imposes strict obligations (e.g. risk management, transparency, human oversight, accuracy, robustness, and cybersecurity) on “high-risk” AI systems while banning certain harmful uses outright. IBM publicly applauded EU negotiators for focusing the Act on high-risk applications and embedding principles of transparency, explainability, and safety.

This mirrors IBM’s long-advocated concept of “precision regulation” – i.e. regulating AI use cases by risk level rather than blanket rules for all AI. In fact, IBM’s calls for “precision regulation” (since at least 2021) appear to have been realised in the EU Act’s tiered risk framework, which IBM welcomes as a pragmatic approach balancing innovation and accountability.

Even before the Act was enforced, IBM had taken steps to align its practices. IBM signed the European Commission’s AI Pact in 2024 – a voluntary pledge to adopt best practices in line with the EU AI Act. As part of this pledge, IBM is committed to mapping and inventorying its high-risk AI systems and deployments and promoting AI ethics education among employees and clients. These actions build on IBM’s existing governance (which already requires an inventory of AI use cases and FactSheet documentation) and ensure IBM can rapidly identify which systems will fall under the “high-risk” classification in Europe.

IBM also contributes to the regulatory dialogue. IBM’s European Government Affairs director authored pieces on how the Act could be clarified around roles and responsibilities, and IBM provided detailed feedback on the draft law to EU policymakers. IBM emphasises human oversight and fundamental rights impact assessments – both key parts of the EU Act – in its client consulting as well, preparing customers to comply.

Sharing an example from Greyhound Fieldnote, in our regulatory alignment advisory with a European fintech company preparing for high-risk AI classification under the EU AI Act, IBM’s watsonx.governance tooling enabled the automatic flagging of models requiring conformity assessment. This allowed the client to accelerate their readiness process and respond to upcoming documentation mandates without restructuring their model pipelines. According to the Greyhound Regulatory Preparedness Pulse 2025, 58% of European enterprise compliance leaders now cite “in-tool readiness for EU AI Act obligations” as a key procurement driver—particularly the ability to generate real-time technical documentation, audit trails, and risk disclosures.

Notably, IBM’s watsonx.governance tool is marketed as a way to automate transparency and documentation requirements that the EU Act will demand (such as maintaining technical documentation and audit trails for high-risk AI). IBM is effectively baking EU compliance into its products, helping clients generate AI FactSheets and bias monitoring reports to satisfy the Act’s reporting and fairness mandates.

By aligning early, IBM gains trust with EU regulators. For instance, IBM was invited to join the EU’s High-Level Expert Group that crafted the Ethics Guidelines for Trustworthy AI in 2019, and it ensured IBM’s principles (fairness, transparency, etc.) echoed those guidelines. With the EU AI Act’s requirements now rolling out, IBM is well-positioned to meet them and to help shape implementation through continued advocacy.

United States Regulatory Landscape

In the U.S., where AI-specific regulation is still emerging, IBM has actively engaged with policymakers to encourage smart governance frameworks. IBM strongly backed the NIST AI Risk Management Framework (RMF), participating in its development and swiftly aligning IBM’s internal standards with NIST’s voluntary guidelines. IBM lauded the NIST AI RMF as “groundwork for advancing trustworthy AI” and promoted it as a baseline for industry and government use. When a bipartisan U.S. Senate bill proposed to require federal agencies to adopt the NIST AI RMF, IBM publicly endorsed it as a “critical step” to ensure AI used by the government is trusted and responsible.

IBM’s influence is also evident in legislative hearings: In May 2023, IBM’s Christina Montgomery (Chief Privacy & Trust Officer) testified to Congress alongside other AI leaders, urging a “precision regulation” approach at the federal level. She advocated for laws that set rules by AI use-case risk (with stricter rules like transparency and impact assessments for high-risk uses) rather than regulating the technology itself. Montgomery also highlighted the need for strong internal governance (she noted any company deploying AI at scale should have an AI ethics lead and an ethics board) to complement regulation. Her testimony aligns closely with IBM’s practices and helped inform lawmakers about corporate AI governance as a model.

IBM has consistently contributed to U.S. policy development on AI. It formally responded to the U.S. Algorithmic Accountability Act proposals and multiple NIST requests for information on AI bias and risk. In late 2023, the White House issued an Executive Order on AI Safety, including mandates on testing AI models and addressing bias; IBM’s existing frameworks (e.g. bias toolkits, AI FactSheets) already address many of these points, and IBM has signalled support for the EO’s goals of AI safety, equity, and privacy. Furthermore, IBM’s Policy Lab publications have argued for federal standards that echo its internal principles, requiring transparency reports for significant AI systems and impact assessments for high-stakes AI (ideas now reflected in proposed legislation and the NIST RMF).

IBM’s stance in the U.S. is often seen as more welcoming of regulation than some Big Tech peers: IBM famously exited the general facial recognition market in 2020 and called on Congress to regulate that technology, citing civil rights risks. This move demonstrated IBM’s willingness to prioritise ethics over certain business opportunities, and it positioned IBM as a trusted voice in discussions on banning or restricting AI uses that pose unacceptable societal risks (which the EU Act also does). Overall, IBM is aligning its governance with emerging U.S. rules by embracing frameworks like NIST’s and advocating for laws that mandate the kind of internal oversight that IBM already practises.

APAC and International Frameworks

IBM extends its governance strategy to comply with and shape AI policies across the Asia-Pacific region and globally. In APAC, many countries are issuing AI ethics guidelines (e.g. Singapore’s Model AI Governance Framework, Japan’s AI R&D Guidelines). IBM has been actively collaborating with governments and industry in these efforts. For example, IBM is working with the Monetary Authority of Singapore (MAS) to implement an AI governance system for the financial sector. The MAS is collaborating with IBM to deploy a “complete AI lifecycle governance tool” that continuously monitors AI models in the central bank’s operations. This partnership helps meet local regulatory expectations (like Singapore’s emphasis on fairness and explainability in finance) and showcases IBM’s tools in a real-world regulatory sandbox.

Similarly, IBM Research and IBM Consulting are engaged with AI Singapore (a national AI initiative) to integrate governance into Singapore’s first large language model (Project SEA-LION), helping Southeast Asian enterprises scale AI safely and responsibly. In regions like Australia and India, which are formulating AI principles, IBM often contributes via industry groups or public comments, leveraging its global Policy Lab to share best practices.

On the international stage, IBM’s governance approach is closely aligned with the OECD AI Principles – a globally recognised set of AI values endorsed by 40+ countries. IBM contributed its expertise to the OECD’s development of these principles in 2019 and endorsed the OECD’s emphasis on AI that is fair, explainable, secure, and human-centred, noting that this matches IBM’s own guidance to governments. IBM views the OECD principles as providing a consistent global reference that complements region-specific laws, much as the OECD’s privacy guidelines preceded data regulations like GDPR.

In line with this, IBM has championed initiatives such as the Global Partnership on AI (GPAI) and the G7’s Hiroshima AI Process, which seek international cooperation on AI governance. In 2023, IBM joined the AI Ethics Initiative at the OECD’s AI Policy Observatory, contributing use cases like IBM’s methodology for AI FactSheets as a resource for broader adoption. IBM’s commitment to global AI safety was also evident when it signed the 2024 AI Seoul Summit’s Frontier AI Safety commitments, pledging to implement advanced AI safety and security practices in line with that multinational effort. This means IBM will incorporate frontier risk considerations (like controlling runaway AI or misuse of foundation models) into its governance framework – an area of growing importance in global policy discussions.

IBM also aligns with formal technical standards such as those from ISO/IEC. IBM experts participate in ISO committees (for instance, on AI management systems) to help shape standards that emphasise transparency, accountability, and ethics in AI management. The ISO/IEC 23894 and 38507 standards, which guide organisational AI governance, echo many elements IBM already practices (model documentation, risk controls, oversight roles). IBM’s watsonx.governance tooling is designed with flexibility to adapt to various standards – whether it’s tagging models by EU risk tier, generating NIST-compliant risk reports, or measuring metrics for an ISO audit.

In essence, IBM has built a baseline governance framework that maps to multiple regimes: risk-based controls for the EU AI Act, NIST’s trustworthy AI criteria, OECD’s human-rights focus, and sectoral laws (like healthcare AI regulations or financial model risk management rules). By engaging in policy development and frequently updating its internal policies, IBM ensures it stays ahead of regulatory changes. For example, IBM’s AI Ethics Board regularly reviews new laws to update IBM’s policies accordingly. This was seen with privacy: IBM’s global Privacy and AI Management System (PIMS) was created to centrally manage GDPR and other privacy compliance, which is now being extended to AI compliance tracking.

As AI regulations proliferate worldwide, IBM’s strategy is to maintain a common governance core that meets the strictest requirements and then fine-tune for local specifics. This proactive stance avoids compliance crises and allows IBM to influence regulations with on-the-ground lessons learned from implementing AI governance across different industries and cultures.

We at Greyhound Research believe IBM’s proactive participation in regulatory design, combined with embedded tooling support for emerging frameworks like the EU AI Act and NIST RMF, reflects a maturity few vendors can match. In our view, the difference lies not just in publishing whitepapers—but in operationalising those principles inside products. As the regulatory tide rises, IBM’s alignment is no longer a compliance edge—it’s a strategic moat.

The actual measure of success of any AI governance plan is not the precision of the written policies but their real-world impact. While many companies talk about responsible AI, only those who use governance systems and track results can say they are truly responsible. Using AI governance solutions, IBM has transcended theoretical governance debates to provide real-world corporate benefits.

From sports organisations improving fairness in decision-making to banking institutions reducing AI bias in credit approvals, IBM’s AI governance approach shows its value in many different sectors. By embedding governance into its internal operations and client interactions, the company ensures that AI models comply with laws and are optimised for fairness, transparency, and accountability.

This section examines some of IBM’s most noteworthy case studies – whether the economic value of AI governance lies in minimising reputational damage, enhancing AI-driven decision-making, or simplifying regulatory compliance. These cases show the practical advantages of integrating governance into AI systems and provide a broad road map for companies applying AI governance.

Enterprise AI Governance in Finance

A notable case study is IBM’s implementation of AI governance for a large global financial services firm. IBM deployed an automated AI governance framework using IBM Cloud Pak for Data components, demonstrating how governance at scale delivers tangible risk mitigation.

In this project, IBM configured a suite of tools – including IBM OpenPages (for governance, risk, and compliance workflows), Watson OpenScale (for model monitoring), Watson Machine Learning and Watson Studio – to create an end-to-end governance platform covering the bank’s entire AI model lifecycle. The solution provided a central inventory of AI models and use cases, with integrated approvals and issue tracking workflows.

As data scientists built models (for credit scoring, fraud detection, etc.), the system automatically captured key facts (data sources, training details, performance metrics) into AI FactSheets and set predefined thresholds for metrics like fairness, quality, and drift for each model. Once models were in production, the platform continuously monitored them: it would trigger alerts if a model’s performance dipped below an acceptable accuracy or if bias metrics exceeded the bank’s risk tolerance.

For example, if a loan approval model started showing a statistically significant uptick in adverse outcomes for a protected group, OpenScale’s bias monitor would flag it, and the system could automatically prompt retraining or send a report to the responsible team. The metrics tracked included accuracy, fairness indices, drift magnitude, and explanation stability, among others.

All of this was done through a unified dashboard and workflow so that model owners, risk managers, and compliance officers had a shared view of model status and could collaboratively address issues (e.g. a risk officer could see that a model exceeded a drift threshold and approve a retraining cycle via the platform).

This level of orchestration is becoming the norm in large-scale deployments where governance needs to span geographies and business units. Sharing an example from Greyhound Fieldnote, in our work with a global financial services firm headquartered in North America, IBM’s integrated tooling helped shift their AI review cycle from quarterly manual audits to live compliance dashboards—reducing decision latency and eliminating spreadsheet-based review bottlenecks. According to the Greyhound Financial Services AI Pulse 2025, 69% of enterprise CIOs and CROs in the sector now prioritise AI platforms that can unify model inventory, compliance logs, and approval workflows in one system.

By implementing this uniform governance, the bank achieved a few key outcomes: it preserved existing AI investments (the platform integrated with the bank’s existing modelling tools), it streamlined audit and compliance reporting (model documentation and metrics were automatically collected for regulators), and it enabled the bank’s C-suite to confidently scale AI usage knowing that risks (operational, regulatory, reputational) were being actively managed across hundreds of models.

IBM reports that the bank can now manage AI deployments across business units “with a uniform, integrated and automated platform,” significantly reducing the manual effort previously needed to track model approvals and check compliance in silos. While specific quantitative metrics are confidential, qualitatively, this governance automation has shortened model review cycles and avoided potential compliance penalties by catching issues early.

This case exemplifies how IBM translates its governance principles into measurable business value: the firm avoided unmitigated bias that could harm customers and pre-empted regulatory findings by having evidence of control over its AI models.

Bias Mitigation and Fairness Outcomes

IBM’s AI governance initiatives have led to demonstrable fairness improvements in real-world scenarios. A compelling example comes from the sports domain: the U.S. Open tennis tournament partnered with IBM to use AI for scheduling and match data and leveraged IBM’s governance tools to ensure fairness.

Using watsonx.governance, the U.S. Open detected and reduced bias in how court assignments were being made (historically, certain players may have been systematically favoured on show courts). After IBM’s intervention, the tournament achieved an increase in “court fairness” from 71% to 82%. In other words, decisions about scheduling and resources were measurably more equitable – an 11 percentage point improvement in fairness – as a result of IBM’s AI governance analytics. This is a concrete metric showing AI governance in action: by auditing the AI’s decisions for bias and adjusting algorithms or processes accordingly, the organisation could quantify a positive change (improved fairness index).

Similarly, in financial services, IBM often uses Watson OpenScale to monitor bias in lending models. In a hypothetical Golden Bank use case (based on common client scenarios), data scientists used fairness metrics from OpenScale to ensure a mortgage approval model was treating applicants without discrimination. The tooling allowed them to see if the model’s approval rates differed by race or gender and to adjust the model until the outcomes met fairness thresholds. They could also generate explanations for each credit decision to provide to customers, fulfilling transparency and enabling recourse.

While specific bank names are confidential, IBM has documented that such approaches have helped clients reduce bias in AI decisions by significant margins and pass regulatory fair-lending tests. For instance, one IBM client in insurance used IBM OpenPages and OpenScale to trace and rectify a bias issue that would have affected pricing for certain demographics, potentially avoiding millions in reputational damage and regulatory fines.

These cases underscore that IBM’s governance solutions are not just theoretical – they drive measurable risk reduction, like narrowing performance gaps between groups or preventing flawed AI outputs.

IBM’s Internal PIMS System

IBM’s experience is a case study in implementing AI governance at the enterprise scale. IBM created an internal Privacy and AI Management System (PIMS) to manage compliance with AI ethics and privacy across its vast operations.

This system, deployed company-wide, is built on IBM’s OpenPages GRC platform and IBM Knowledge Catalog and centralises all processes for tracking data and AI compliance. PIMS essentially serves as IBM’s control tower for AI governance: it captures metadata on over 5,500 applications and processes within IBM that involve AI or personal data.

By automating workflows through OpenPages, IBM broke down what used to be siloed regional compliance activities and moved to a globally coordinated model. One outcome is agility – IBM was able to launch a new enterprise-wide AI compliance program in just 6 weeks using PIMS, whereas historically, such rollouts would take far longer due to scattered systems.

PIMS provides a single source of truth for IBM’s AI models and datasets, tracking their status against various regulations and IBM’s own ethics principles. Christina Montgomery (IBM’s Chief Privacy Officer) noted that PIMS helps identify gaps between IBM’s current practices and new regulations “as they come out,” allowing rapid adaptation. This proved invaluable when multiple new AI guidelines were emerging simultaneously; IBM could map each requirement (say, a new transparency obligation from the EU) to internal controls in PIMS and then evidence compliance.

The measurable benefits for IBM include greater efficiency (compliance efforts scaled without proportional headcount increases) and enhanced trust from IBM’s clients and regulators (IBM can demonstrate its house is in order by showing PIMS dashboards and documentation during audits).

IBM’s internal case also illuminated challenges – for example, the Gender Shades audit in 2018 revealed accuracy disparities in IBM’s facial recognition AI for different demographic groups. IBM addressed this by retraining its models (substantially improving accuracy for darker-skinned subjects) and later discontinuing general face recognition offerings to avoid ethical pitfalls.

This responsiveness shows IBM using governance feedback loops to drive product decisions and improve technology. The World Economic Forum, in a 2021 case study of IBM, highlighted these efforts as exemplary, noting IBM’s long-term commitment to “compete on trust” and its willingness to evolve practices when challenges like bias are discovered.

Client Successes and Challenges

Beyond individual cases, IBM’s AI governance work with clients has yielded broad improvements in risk management. IBM Consulting’s responsible AI services have helped organisations with tangible outcomes such as a reduction in model error rates due to early detection of data drift, faster deployment times because ethical review is standardised, and improved stakeholder confidence, leading to higher AI adoption rates.

However, implementing AI governance comes with several challenges, including cultural resistance (“data science teams initially saw governance as slowing innovation”) and technical integration issues (connecting disparate model development platforms into one governance system).

IBM addresses cultural challenges by involving cross-functional teams early and demonstrating that governance actually accelerates innovation by building trust (e.g. models get to production with fewer roadblocks when they’ve cleared governance checks).

Technical challenges are addressed by IBM’s focus on open architecture – Watsonx.governance can integrate models “from any vendor” and work across hybrid cloud environments, meaning companies don’t have to rip and replace existing AI pipelines. IBM’s case studies often emphasise this flexibility as the key to success.

In summary, IBM’s client engagements and its own internal use show that robust AI governance can produce real business benefits: higher AI quality, lower risk of compliance violations, improved fairness, and greater stakeholder trust. Each success story also provides feedback for IBM to enhance its frameworks (for instance, seeing how fairness metrics need to be tailored to different contexts), ensuring continuous improvement in IBM’s AI governance practice.

We at Greyhound Research believe IBM’s governance success is best demonstrated not in slide decks, but in outcomes. Whether it’s measurable bias reduction in sports scheduling or risk-controlled AI rollouts in global banks, IBM has shown that Responsible AI isn’t abstract—it’s operational, auditable, and impactful. As enterprises demand proof, not promises, IBM’s case-led approach gives it both credibility and momentum.

We at Greyhound Research believe the depth of IBM’s offerings (like watsonx.governance integrating OpenPages and OpenScale) and its proven domain expertise in operationalising AI ethics are commendable. However, IBM does face the challenge of keeping pace with rapid AI advancements. But then, that’s true for all vendors in the game.

Compared to other tech giants, IBM has a rather proactive stance on AI ethics. For instance, IBM was one of the first to publish corporate AI ethics principles (in 2017) and to establish an internal AI ethics board (formally launched in 2018), whereas many other companies followed years later. IBM also released multiple open-source AI ethics toolkits (AI Fairness 360, etc.), which set it apart as willing to share methods to tackle bias and explainability. We at Greyhound Research believe these open toolkits have helped establish industry benchmarks for fairness testing and transparency reporting. By making these tools available for all, IBM has influenced best practices beyond its own walls – for example, banks and hospitals that were not IBM clients have still used AIF360 to audit their models, indirectly furthering IBM’s vision of ethical AI.
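
AIF360 itself is open source and pip-installable (`pip install aif360`); a minimal audit of a training dataset for statistical parity might look like the sketch below, where the toy data and column names are ours, not IBM’s:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged group), 'label' the outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# Disparate impact: ratio of favourable-outcome rates (unprivileged / privileged).
# Here it is 0.25 / 0.75 ~= 0.33, below the common 0.8 "four-fifths" threshold,
# so this dataset would be flagged for review before any model is trained on it.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```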

That said, IBM is not alone in AI governance; companies like Microsoft, Google, and Accenture also have responsible AI programs. IBM’s strength lies in its enterprise focus – it tailors governance solutions to business use cases (like model risk management in finance or compliance in healthcare) rather than consumer internet scenarios. Where IBM may have less presence is in consumer-facing AI governance (for example, policing an online platform for AI-generated content). However, IBM’s niche is clear: providing governance for mission-critical AI in enterprise and government. In this domain, IBM is often considered a leader alongside Microsoft (which has its own internal Responsible AI office and toolkits like Fairlearn).

However, IBM does face its fair share of challenges. One challenge is ensuring that governance keeps up with new AI technologies like large language models and generative AI. IBM has addressed this by publishing guidelines for foundation models and releasing the Granite series of AI models with transparency features, but the fast-evolving nature of AI means IBM’s governance framework must remain agile. We also hold the firm belief that no matter how good internal governance is, external validation is crucial – IBM (and others) must undergo independent audits of their AI systems. IBM has started moving in this direction by engaging with certification efforts (it’s involved in the EU’s AI Act discussions on conformity assessments, for example).

We at Greyhound Research believe the following are areas where IBM needs to further its efforts:

Generative AI governance

Ensuring AI ethics principles apply to generative models and foundation models. IBM’s recent Seoul Summit commitments and Granite model transparency work are steps in this direction, but the field is new. IBM will need to continue refining techniques for things like controlling AI-generated content, addressing IP issues, and managing the unique risks of large language models. IBM’s publication of a Responsible AI guide for foundation models in 2024 (recognised by Stanford for transparency) is an example of how it’s tackling this.

External accountability

While IBM has strong internal accountability, having external advisory boards or third-party audits could further enhance trust. IBM does collaborate with external stakeholders (it has advisors and participates in multi-stakeholder initiatives), but as regulations move towards requiring third-party conformity assessments for high-risk AI (as in the EU AI Act), IBM and its clients will likely engage more independent oversight.

Scaling to SMEs

IBM’s governance solutions are enterprise-grade; a possible growth area is making governance accessible to smaller organisations or those earlier in their AI journey. IBM Consulting is addressing this by offering workshops and “Governance as a Service” for companies that cannot build everything in-house. Balancing governance and innovation is also an ongoing challenge; IBM must ensure its approach doesn’t become too bureaucratic. So far, IBM has addressed this by emphasising agile governance, embedding checks into workflows to streamline rather than block, and using automation.

Transparency to end-users

In our view, transparency is a multi-faceted topic, and IBM must go beyond business-facing transparency (FactSheets) and offer user-facing transparency (labels or disclosures when AI is used). IBM tends to operate in B2B contexts, but as its clients deploy AI that affects consumers, IBM’s tools might evolve to include features that help clients communicate AI usage and obtain consent from end-users.
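
No such feature exists in IBM’s tooling today as far as we know; conceptually, user-facing transparency can be as simple as attaching a machine-readable disclosure to every AI-mediated response, which client applications then render as a visible label. A hypothetical sketch:

```python
# Hypothetical user-facing disclosure: attach a disclosure record to every
# AI-mediated response so client applications can render a visible label
# and route users to a human point of contact. All names are illustrative.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDisclosure:
    system_name: str      # e.g. "claims-triage-assistant" (illustrative)
    is_ai_generated: bool
    human_review: bool    # whether a human reviews output before the user sees it
    purpose: str
    contact: str          # where a user can contest an automated decision

def with_disclosure(response_text: str, disclosure: AIDisclosure) -> str:
    """Wrap a model response in an envelope carrying the disclosure record."""
    envelope = {
        "content": response_text,
        "ai_disclosure": asdict(disclosure),
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope, indent=2)

print(with_disclosure(
    "Your claim has been pre-approved.",
    AIDisclosure("claims-triage-assistant", True, False,
                 purpose="insurance claim triage", contact="appeals@example.com"),
))
```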

Succinctly put, IBM’s long history in trusted enterprise technology (security, compliance, etc.) gives it credibility in AI governance. IBM’s decision to pull back from certain controversial AI applications (like facial recognition) was seen as an ethical stance that bolstered its reputation. Competitors sometimes took longer to make similar moves. This has positioned IBM as a “responsible elder” in AI – a role that brings trust, though IBM must also continue to innovate to stay relevant in the hyper-competitive AI market.

Analyst In Focus: Sanchit Vir Gogia

Sanchit Vir Gogia, or SVG as he is popularly known, is a globally recognised technology analyst, innovation strategist, digital consultant and board advisor. SVG is the Chief Analyst, Founder & CEO of Greyhound Research, a Global, Award-Winning Technology Research, Advisory, Consulting & Education firm. Greyhound Research works closely with global organizations, their CxOs and the Board of Directors on Technology & Digital Transformation decisions. SVG is also the Founder & CEO of The House Of Greyhound, an eclectic venture focusing on interdisciplinary innovation.

Copyright Policy. All content contained on the Greyhound Research website is protected by copyright law and may not be reproduced, distributed, transmitted, displayed, published, or broadcast without the prior written permission of Greyhound Research or, in the case of third-party materials, the prior written consent of the copyright owner of that content. You may not alter, delete, obscure, or conceal any trademark, copyright, or other notice appearing in any Greyhound Research content. We request our readers not to copy Greyhound Research content and not to republish or redistribute it (in whole or in part) via email or in any media, including websites, newsletters, or intranets. We understand that you may want to share this content with others, so we’ve added tools under each content piece that allow you to share the content. If you have any questions, please get in touch with our Community Relations Team at connect@thofgr.com.

