Data sovereignty has become a non-negotiable requirement for financial institutions operating in regulated environments. As AI-powered compliance tools handle increasingly sensitive transaction data, the question of *where* that data lives — and who controls it — has moved from a technical preference to a regulatory and security imperative.
We're excited to announce that Vera AI is now available for on-premise deployment, bringing full data sovereignty to your blockchain compliance operations.
## What Is On-Premise Deployment?
On-premise deployment means running Vera AI entirely within your own infrastructure — your servers, your network, your data center. Unlike cloud-based SaaS deployments where transaction data travels to and is processed on third-party servers, on-premise deployment ensures that sensitive financial data never leaves your controlled environment.
This is not a stripped-down version of Vera AI. The on-premise offering includes the full capability stack:
- Real-time transaction risk scoring across all supported blockchain networks
- AI-powered anomaly detection and pattern recognition
- AML and sanctions screening with the complete Defy intelligence database
- Case management and workflow automation
- Regulatory reporting generation
- REST API and webhook integration with your existing systems
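To give a feel for what API integration can look like, here is a minimal Python sketch of building a risk-scoring request. The endpoint path, header names, and payload fields are illustrative placeholders, not the documented Vera AI API — your installation's API reference is the source of truth.

```python
import json
import urllib.request

# Your on-premise instance; hostname is a placeholder.
VERA_BASE_URL = "https://vera.internal.example"

def build_score_request(tx_hash: str, network: str, api_key: str) -> urllib.request.Request:
    """Build a risk-scoring request for a single transaction.

    Endpoint and field names are illustrative, not the actual Vera AI schema.
    """
    payload = json.dumps({"tx_hash": tx_hash, "network": network}).encode()
    return urllib.request.Request(
        url=f"{VERA_BASE_URL}/api/v1/risk-score",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_score_request("0xabc123", "ethereum", api_key="REDACTED")
```

Because the instance runs inside your network, the base URL resolves to internal infrastructure and the request never crosses the public internet.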
## Why On-Premise Matters for Financial Institutions
### Data Residency and Regulatory Compliance
Many jurisdictions have strict requirements about where financial data can be processed and stored. Turkey's KVKK, the EU's GDPR, and similar data protection frameworks impose obligations on financial institutions regarding cross-border data transfers.
For institutions operating in Turkey, MASAK compliance often requires maintaining complete control and auditability of data processing systems. On-premise deployment removes any ambiguity about where that processing happens: data is handled on systems you can point to, inspect, and audit.
For institutions in the EU, GDPR's restrictions on international data transfers become far simpler to manage when processing happens entirely within your own infrastructure: if no personal data leaves your environment, there is typically no cross-border transfer to justify with Standard Contractual Clauses or Transfer Impact Assessments.
### Confidentiality of Compliance Intelligence
Your compliance operations contain sensitive intelligence: which wallet addresses you're flagging, which patterns trigger your alerts, how you've calibrated your risk thresholds. In a cloud deployment, this intelligence exists — in processed form — outside your organization's direct control.
On-premise deployment keeps your compliance strategy entirely confidential. Competitors, bad actors studying your detection patterns, and even third-party infrastructure providers cannot observe what your Vera AI installation is doing.
### Air-Gapped Security Options
For the most security-sensitive deployments — central banks, custody providers, regulated exchanges in high-risk jurisdictions — Vera AI's on-premise model supports fully air-gapped configurations. In an air-gapped setup, the Vera AI instance has no external network connectivity. Intelligence database updates are delivered via secure offline transfer, and no transaction data ever traverses an internet connection.
### Deep Integration with Existing Security Infrastructure
On-premise deployment allows Vera AI to integrate with your existing security stack in ways cloud deployments cannot match:
- **SIEM integration**: Vera AI events flow directly into your Security Information and Event Management system
- **HSM key management**: Encryption keys managed by your Hardware Security Modules
- **LDAP/Active Directory**: User access via your existing directory infrastructure
- **Network segmentation**: Vera AI operates within your existing network security zones
- **Audit logging**: All activity logs feed into your existing audit infrastructure
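As one sketch of the SIEM path above: Common Event Format (CEF) is a widely supported line format for SIEM ingestion, and an on-premise alert stream can be rendered into it locally. The event fields below are hypothetical, not Vera AI's actual event schema.

```python
def to_cef(event: dict) -> str:
    """Render a compliance alert as a CEF line for SIEM ingestion.

    CEF header layout: CEF:Version|Vendor|Product|DeviceVersion|
    SignatureID|Name|Severity|Extension. The `event` keys here are
    illustrative placeholders.
    """
    extension = " ".join(f"{k}={v}" for k, v in event["fields"].items())
    return (
        f"CEF:0|Defy|VeraAI|1.0|{event['id']}|{event['name']}|"
        f"{event['severity']}|{extension}"
    )

line = to_cef({
    "id": "TX-RISK-HIGH",
    "name": "High-risk transaction",
    "severity": 8,
    "fields": {"src": "0xabc123", "riskScore": 92},
})
```

Because formatting and forwarding both happen inside your network, alert content reaches your SIEM without transiting any third-party service.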
## Deployment Architecture
Vera AI on-premise is containerized and Kubernetes-orchestrated, deployable across a range of infrastructure configurations.
### Minimum Production Requirements
For deployments handling up to 1 million transactions per day:
- **Compute**: 16 vCPU, 64 GB RAM
- **Storage**: 2 TB NVMe SSD (hot), 20 TB HDD (archive)
- **Network**: 10 Gbps internal
- **GPU (optional)**: NVIDIA A100 for accelerated inference at high volumes
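For a rough sense of what this sizing target means in throughput terms, 1 million transactions per day is a modest sustained rate; the burst multiplier below is an assumption for illustration, not a Vera AI benchmark figure — size against your own traffic profile.

```python
TX_PER_DAY = 1_000_000
SECONDS_PER_DAY = 86_400
PEAK_FACTOR = 5  # assumed burst multiplier; tune to your traffic profile

avg_tps = TX_PER_DAY / SECONDS_PER_DAY  # ~11.6 tx/s sustained
peak_tps = avg_tps * PEAK_FACTOR        # ~58 tx/s at assumed peaks
```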
### High Availability
Production deployments support active-active HA with:
- Multiple inference nodes behind a load balancer
- Synchronous database replication
- Automated failover with sub-30-second RTO
- Zero-downtime rolling updates
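The failover behavior can be pictured with a toy sketch of load-balancer logic: probe each inference node, fall back to the next on failure, and retry with backoff. Real deployments use Kubernetes readiness probes and load-balancer health checks rather than application code like this; the function below is purely illustrative.

```python
import time
from typing import Callable, Sequence

def first_healthy(nodes: Sequence[str],
                  probe: Callable[[str], bool],
                  retries: int = 3,
                  backoff_s: float = 0.1) -> str:
    """Return the first inference node that passes a health probe.

    Illustrates the failover pattern only; production traffic routing
    is handled by the load balancer, not application code.
    """
    for attempt in range(retries):
        for node in nodes:
            if probe(node):
                return node
        time.sleep(backoff_s * (attempt + 1))
    raise RuntimeError("no healthy inference node")
```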
## Intelligence Updates Without Compromising Security
Vera AI addresses the challenge of keeping intelligence databases current while maintaining strict data controls:
1. Defy publishes signed intelligence package updates (typically daily)
2. Updates are downloaded through a one-way transfer mechanism
3. Cryptographic signatures are verified before installation
4. Updates apply without restarting production services
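The verify-before-install gate in step 3 can be sketched as follows. Real packages would carry a public-key signature (for example Ed25519); this sketch stands in with an HMAC-SHA256 tag so it stays self-contained, and the function names are hypothetical — the point is that an update that fails verification is rejected before anything is applied.

```python
import hashlib
import hmac

def verify_package(package: bytes, signature: str, key: bytes) -> bool:
    """Check an intelligence package against its published signature.

    HMAC-SHA256 stands in here for a real public-key signature scheme;
    the reject-before-install logic is the same either way.
    """
    expected = hmac.new(key, package, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def install_update(package: bytes, signature: str, key: bytes) -> None:
    if not verify_package(package, signature, key):
        raise ValueError("signature mismatch: refusing to install update")
    # ...apply the verified update to the running intelligence database...
```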
For air-gapped environments, packages are delivered on physical media and verified offline.
## Compliance Benefits at a Glance
| Requirement | Cloud SaaS | Vera AI On-Premise |
|---|---|---|
| Data residency guarantee | Depends on cloud region | Full — data never leaves your infrastructure |
| Air-gapped operation | Not available | Supported |
| HSM key management | Limited | Full support |
| SIEM integration | API-based | Direct event streaming |
| Cross-border data transfer | Often required | Not applicable |
| Audit trail control | Shared with provider | Entirely yours |
## Getting Started
The on-premise deployment process:
**Week 1–2**: Infrastructure assessment and capacity planning
**Week 3**: Kubernetes cluster setup and staging deployment
**Week 4**: Integration testing — APIs, monitoring, alerting
**Week 5**: Production deployment with parallel run
**Week 6+**: Full cutover and ongoing support
Financial institutions should not have to choose between cutting-edge AI compliance capabilities and full control over their data. Vera AI on-premise delivers both.
Contact us to schedule a technical architecture review for your on-premise deployment.