Trust & Security

Built on principles, not promises

Trust is earned through consistent action, not marketing claims. This page explains how we handle your data, make decisions about AI, and hold ourselves accountable.

Our Approach

How we think about trust

Trust verification is consequential work. These principles guide how we approach it.

Precision over speed

We prioritize accuracy in every verification process. This sometimes means our results take longer, but we believe correct answers matter more than fast guesses.

Human oversight

Automated systems handle scale, but humans make final decisions on edge cases. Our verification specialists review flagged results before they reach you.

Documented methodology

Every trust score comes with an explanation. You can see which data sources contributed and how we weighted each factor.
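For illustration, the sketch below (Python) shows one way an explainable weighted score can be assembled. The factor names, data sources, and weights are invented for this example and are not our actual methodology.

    # Illustrative only: factor names, sources, and weights are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Factor:
        name: str
        source: str    # which data source contributed this signal
        value: float   # normalized signal, 0.0 to 1.0
        weight: float  # relative importance of this factor

    def trust_score(factors: list[Factor]) -> dict:
        """Return a weighted score plus a per-factor breakdown."""
        total_weight = sum(f.weight for f in factors)
        score = sum(f.value * f.weight for f in factors) / total_weight
        breakdown = [
            {"factor": f.name, "source": f.source,
             "contribution": round(f.value * f.weight / total_weight, 3)}
            for f in factors
        ]
        return {"score": round(score, 3), "breakdown": breakdown}

    result = trust_score([
        Factor("registry_match", "business registry", 1.0, 0.5),
        Factor("domain_age", "WHOIS records", 0.7, 0.3),
        Factor("review_consistency", "public reviews", 0.4, 0.2),
    ])
    # result["breakdown"] lists each factor, its source, and its weighted contribution.

The point of the breakdown is that the score never arrives alone: each factor, its source, and its weighted contribution travel with it.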

Data Privacy & Security

Your data, protected

We treat customer data with the same care we would want for our own. These are not aspirational goals—they are how we operate today.

Encryption at rest and in transit

All data is encrypted with AES-256 when stored and protected by TLS 1.3 when transmitted. These are the same standards used by financial institutions.
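As a rough sketch of what AES-256 encryption at rest looks like in practice, the example below uses AES-256-GCM from the Python "cryptography" package. It illustrates the technique only; it is not our production storage code, and key management (for example, a key management service) is omitted.

    # Illustrative sketch of AES-256 at-rest encryption; not production code.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # in practice held in a KMS, never in source
    aesgcm = AESGCM(key)

    def encrypt_record(plaintext: bytes, associated_data: bytes = b"") -> bytes:
        nonce = os.urandom(12)  # unique 96-bit nonce per record
        return nonce + aesgcm.encrypt(nonce, plaintext, associated_data)

    def decrypt_record(blob: bytes, associated_data: bytes = b"") -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return aesgcm.decrypt(nonce, ciphertext, associated_data)

    assert decrypt_record(encrypt_record(b"customer record")) == b"customer record"

TLS 1.3 covers the transmission side and is handled at the connection layer rather than in application code.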

Data minimization

We collect only what we need to provide the service you requested. We do not harvest data for unrelated purposes.

Access controls

Internal access to customer data is limited to employees who need it for their specific job function. All access is logged and audited.
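A simplified sketch of that pattern, least-privilege checks paired with an audit trail, is shown below. The role names, permissions, and log format are hypothetical.

    # Hypothetical sketch of least-privilege access with audit logging.
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("audit")

    # Each role maps to the minimum set of permissions its job function requires.
    ROLE_PERMISSIONS = {
        "verification_specialist": {"read:verification_data"},
        "support_engineer": {"read:account_metadata"},
    }

    def access_customer_data(employee: str, role: str, permission: str, record_id: str) -> bool:
        allowed = permission in ROLE_PERMISSIONS.get(role, set())
        # Every attempt is logged, whether it is allowed or denied.
        audit_log.info(
            "time=%s employee=%s role=%s permission=%s record=%s allowed=%s",
            datetime.now(timezone.utc).isoformat(),
            employee, role, permission, record_id, allowed,
        )
        return allowed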

Retention limits

We retain data only as long as necessary for the stated purpose. You can request deletion of your data at any time.
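A minimal sketch of a retention check is below. The 90-day window and record shape are assumptions made for illustration; actual retention periods depend on the service and the stated purpose.

    # Minimal retention sketch; the 90-day window is an assumption, not a stated policy.
    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=90)

    def due_for_deletion(records: list[dict]) -> list[dict]:
        """Return records whose retention window has lapsed."""
        now = datetime.now(timezone.utc)
        return [r for r in records if now - r["completed_at"] > RETENTION]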

Compliance: TrustIQ maintains SOC 2 Type II certification and complies with GDPR, CCPA, and other applicable data protection regulations. Our latest audit report is available upon request.

Responsible AI

How we use artificial intelligence

AI is a tool, not a replacement for judgment. Here is how we use it and where we draw boundaries.

What AI does in our system

  • Pattern recognition

    AI helps identify patterns across large datasets that would be impractical to analyze manually.

  • Data aggregation

    AI consolidates information from multiple verified sources into coherent profiles.

  • Anomaly detection

    AI flags inconsistencies that warrant human review, improving the efficiency of our verification process.
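As a toy illustration of the anomaly-detection step, the sketch below flags values that sit far from the median and leaves the decision to a reviewer. The threshold and data are invented; it only conveys the idea of surfacing outliers for human review, not the models we actually run.

    # Toy anomaly flagging: values far from the median (in median absolute
    # deviations) are flagged for human review, never auto-rejected.
    from statistics import median

    def flag_for_review(values: list[float], tolerance: float = 3.0) -> list[int]:
        """Return indices of values that deviate strongly from the median."""
        med = median(values)
        mad = median(abs(v - med) for v in values) or 1e-9  # guard against zero spread
        return [i for i, v in enumerate(values) if abs(v - med) / mad > tolerance]

    print(flag_for_review([0.20, 0.30, 0.25, 0.28, 9.50]))  # -> [4]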

What AI does not do

  • Make final determinations

    AI produces scores and recommendations. Humans make final decisions on consequential matters.

  • Access data beyond scope

    Our AI systems only process data relevant to the specific verification request.

  • Train on your data

    Customer data is not used to train our AI models. Your information stays yours.

Transparency & Accountability

We explain our work

A trust company that hides its methods is a contradiction. Here is how we hold ourselves accountable.

01

Clear documentation

Our methodology documentation explains how we calculate trust scores, what data sources we use, and how different factors are weighted. No proprietary black boxes.

02

Incident disclosure

If we experience a security incident that affects your data, we will notify you within 72 hours with a clear explanation of what happened and what we are doing about it.

03

Dispute process

If you believe a trust assessment is inaccurate, you can request a review. We will explain our reasoning and correct errors when they occur.

Our Commitments

What we will never do with your data

Some things are not negotiable. These commitments are built into our contracts, our systems, and our culture.

Sell your data to third parties

Your data is not a product. We do not sell, rent, or trade customer information to advertisers, data brokers, or anyone else.

Use your data for unrelated purposes

Data collected for verification stays within that context. We do not repurpose your information for marketing, profiling, or other unrelated activities.

Train AI models on customer data

Your queries and data are not used to improve our AI systems. Model training happens on separate, anonymized datasets.

Share data with government agencies without legal process

We require valid legal process (subpoena, court order, or warrant) before disclosing customer data to any government entity. We notify affected customers when legally permitted.

Retain data longer than necessary

When data is no longer needed for the service you requested, we delete it. We do not keep information indefinitely 'just in case.'

Make claims we cannot verify

If we are uncertain about a data point, we say so. We do not present speculation as fact or inflate confidence scores to appear more accurate than we are.

Have questions about our practices?

trust@trustiq.com