Classified networks, critical infrastructure, and high-security environments cannot use cloud-based AI services. Data sovereignty requirements prohibit external API calls. Yet these environments need AI-powered analysis more than most.
LogZilla solves this problem with fully on-premises AI capability. The same natural language queries, threat analysis, and remediation commands available in cloud-connected environments work identically in air-gapped networks.
The Air-Gap Challenge
Traditional AI services require internet connectivity. Large language models run in cloud data centers. Every query sends data to external servers. This architecture is incompatible with:
- Classified government networks (IL4, IL5, IL6)
- Defense contractor environments (CMMC Level 2+)
- Critical infrastructure (power, water, transportation)
- Healthcare systems with strict data residency
- Financial institutions with regulatory constraints
These organizations need AI capability without cloud dependency.
On-Premises AI Architecture
LogZilla integrates with Ollama, an open-source framework for running large language models locally. The architecture requires no external connectivity after initial deployment.
```text
[Log Sources] → [LogZilla] → [Ollama/LLM]
                     ↓             ↓
                 [Storage]   [AI Analysis]
                                   ↓
                          [Reports & Alerts]
```
All components run within the network boundary:
- LogZilla: Log ingestion, storage, and query processing
- Ollama: Local LLM inference engine
- AI Models: Llama, Mistral, or Mixtral running on local hardware
Zero external API calls. Zero data exfiltration risk. Full AI capability.
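The query path can be exercised directly against Ollama's local REST API. The sketch below is illustrative, not LogZilla's internal prompt format; it assumes Ollama is running on its default port (11434) with a llama3 model loaded.

```python
# Minimal sketch: a natural language query answered entirely on-premises.
# Assumes Ollama's default local endpoint and a loaded llama3 model.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # loopback only, no external calls

def ask_local_llm(question: str, model: str = "llama3") -> str:
    """Send a question to the local Ollama instance and return the answer."""
    resp = requests.post(OLLAMA_URL, json={
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize failed SSH logins in the last 24 hours."))
```

Because the endpoint is bound to the loopback interface, the exchange never leaves the host, let alone the network boundary.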
Supported AI Models
LogZilla works with multiple open-source models through Ollama:
| Model | Parameters | Use Case | Hardware Requirements |
|---|---|---|---|
| Llama 3 8B | 8 billion | General analysis | 16 GB RAM, GPU optional |
| Llama 3 70B | 70 billion | Complex analysis | 64 GB RAM, GPU recommended |
| Mistral 7B | 7 billion | Fast inference | 16 GB RAM, GPU optional |
| Mixtral 8x7B | 47 billion | High accuracy | 48 GB RAM, GPU recommended |
Model selection depends on hardware availability and analysis complexity. All models provide natural language query capability with varying response quality and speed.
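As a rough illustration, the table above can be encoded as a selection heuristic. The model tags follow Ollama's naming conventions and the RAM thresholds mirror the hardware requirements column; treat both as a starting point, not a sizing guarantee.

```python
# Illustrative helper that encodes the model table as a selection heuristic.
MODELS = [
    # (ollama_tag, min_ram_gb, use_case) -- sorted largest to smallest
    ("llama3:70b",   64, "complex analysis"),
    ("mixtral:8x7b", 48, "high accuracy"),
    ("llama3:8b",    16, "general analysis"),
    ("mistral:7b",   16, "fast inference"),
]

def pick_model(available_ram_gb: int, prefer_speed: bool = False) -> str:
    """Return the largest model that fits in RAM, or the fastest if requested."""
    candidates = [m for m in MODELS if m[1] <= available_ram_gb]
    if not candidates:
        raise RuntimeError("insufficient RAM for any supported model")
    if prefer_speed:
        return candidates[-1][0]   # smallest footprint = fastest inference
    return candidates[0][0]        # largest model that fits

print(pick_model(64))                      # -> llama3:70b
print(pick_model(16, prefer_speed=True))   # -> mistral:7b
```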
Deployment Scenarios
Forward Operating Base (FOB)
Tactical environments require ruggedized hardware with minimal footprint. LogZilla tactical appliances deploy in Pelican case form factors:
- MIL-STD-810G shock and vibration rated
- Extended temperature range (-20°C to +55°C)
- 120V/240V AC or 12V/24V DC power options
- Integrated UPS for power continuity
- SSD storage for reliability
A single tactical appliance provides full LogZilla and AI capability for deployed units.
Shipboard Operations
Naval vessels require systems that operate independently for extended periods. LogZilla shipboard deployments include:
- Rack-mounted 2U form factor
- Redundant power supplies
- TEMPEST compliance options
- Integration with shipboard networks
- Autonomous operation without shore connectivity
SCIF Environments
Sensitive Compartmented Information Facilities require strict isolation. LogZilla SCIF deployments provide:
- Complete air-gap with no external interfaces
- Cross-domain solution integration where authorized
- Audit logging for all access and queries
- Role-based access control
- Encryption at rest and in transit
Data Center Deployment
Standard enterprise data centers deploy LogZilla on commodity hardware:
- VMware, Hyper-V, or bare metal installation
- Kubernetes deployment for scale
- Standard 1U/2U rack servers
- Integration with existing infrastructure
Compliance Alignment
Air-gapped LogZilla deployments support multiple compliance frameworks:
CMMC (Cybersecurity Maturity Model Certification)
- Level 2: Controlled Unclassified Information (CUI) protection
- Level 3: Advanced persistent threat protection
- No cloud dependencies that complicate certification
FedRAMP
- Supports FedRAMP High baseline controls
- On-premises deployment eliminates cloud authorization requirements
- Maintains audit trail for continuous monitoring
NIST 800-53
- AC (Access Control): Role-based access, audit logging
- AU (Audit and Accountability): Comprehensive event logging
- SC (System and Communications Protection): Encryption, network isolation
- SI (System and Information Integrity): Log analysis, anomaly detection
DoD Impact Levels
- IL4: Controlled Unclassified Information
- IL5: CUI and National Security Systems
- IL6: Classified information (with appropriate accreditation)
Hardware Requirements
Minimum Configuration (Small Deployment)
- CPU: 8 cores
- RAM: 32 GB
- Storage: 1 TB SSD
- GPU: Optional (CPU inference available)
- Network: 1 Gbps
Supports approximately 100 GB/day log ingestion with AI analysis.
Recommended Configuration (Enterprise)
- CPU: 32 cores
- RAM: 128 GB
- Storage: 10 TB NVMe
- GPU: NVIDIA A100 or equivalent
- Network: 10 Gbps
Supports approximately 1 TB/day log ingestion with fast AI inference.
High-Performance Configuration
- CPU: 64+ cores
- RAM: 256+ GB
- Storage: 50+ TB NVMe array
- GPU: Multiple NVIDIA A100/H100
- Network: 40 Gbps+
Supports multi-TB/day ingestion with real-time AI analysis.
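For back-of-envelope sizing against these tiers, a simple estimate can relate daily ingest, retention, and compression. The 10:1 compression ratio and 30% growth headroom below are assumptions for illustration; actual ratios depend on log content and storage settings.

```python
# Back-of-envelope storage sizing for the configurations above.
def storage_needed_tb(daily_gb: float, retention_days: int,
                      compression_ratio: float = 10.0,
                      headroom: float = 1.3) -> float:
    """Estimate on-disk storage in TB for a given ingest rate and retention."""
    raw_gb = daily_gb * retention_days
    on_disk_gb = raw_gb / compression_ratio * headroom  # 30% growth headroom
    return round(on_disk_gb / 1000, 1)

# Enterprise example: 1 TB/day ingest with 90 days of retention.
print(storage_needed_tb(daily_gb=1000, retention_days=90))  # ~11.7 TB
```

Note that this covers log storage only; model files and indexes add to the footprint.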
Installation Process
Phase 1: Hardware Preparation
- Provision server hardware per requirements
- Install base operating system (RHEL, Rocky, Ubuntu)
- Configure network interfaces
- Apply security hardening per site requirements
Phase 2: LogZilla Installation
- Transfer LogZilla installation package via approved media
- Run installation script
- Configure storage and retention policies
- Validate log ingestion from test sources
Phase 3: Ollama and Model Setup
- Install Ollama from approved package
- Transfer AI model files via approved media
- Load models into Ollama
- Validate model inference (see the sketch after this list)
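A minimal version of that validation step might look like the following, assuming Ollama's default local API: first confirm the transferred model is registered, then confirm it produces output. The model name is a placeholder for whatever was approved for the site.

```python
# Sketch of the Phase 3 validation step against Ollama's local API.
import requests

BASE = "http://localhost:11434"

def validate_model(model: str = "llama3") -> None:
    # 1. Confirm the model was loaded from the transferred files.
    tags = requests.get(f"{BASE}/api/tags", timeout=10).json()
    names = [m["name"] for m in tags.get("models", [])]
    assert any(n.startswith(model) for n in names), f"{model} not loaded"

    # 2. Confirm inference produces output.
    r = requests.post(f"{BASE}/api/generate", json={
        "model": model,
        "prompt": "Reply with the single word OK.",
        "stream": False,
    }, timeout=300)
    r.raise_for_status()
    assert r.json()["response"].strip(), "empty inference response"
    print(f"{model}: inference validated")

validate_model()
```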
Phase 4: Integration
- Configure LogZilla AI backend to use local Ollama
- Test natural language queries
- Validate report generation
- Configure user access and permissions
Operational Considerations
Model Updates
AI models improve over time. Air-gapped environments update models through approved media transfer processes (an integrity-check sketch follows the list):
- Download updated model on connected system
- Transfer via approved cross-domain solution or physical media
- Load new model into Ollama
- Validate functionality before production use
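One hedged sketch of the transfer integrity check: compare SHA-256 digests of the transferred model files against a manifest generated on the connected system. The manifest format assumed here (one "digest  filename" line per file, as produced by sha256sum) is an assumption.

```python
# Verify transferred model files against a sha256sum-style manifest.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream the file through SHA-256 in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_manifest(manifest: Path, model_dir: Path) -> None:
    for line in manifest.read_text().splitlines():
        digest, name = line.split(maxsplit=1)
        actual = sha256_of(model_dir / name.strip())
        status = "OK" if actual == digest else "MISMATCH"
        print(f"{name.strip()}: {status}")
        assert actual == digest, f"checksum mismatch for {name}"

verify_manifest(Path("models.sha256"), Path("/opt/models"))
```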
Performance Monitoring
Monitor AI inference performance to ensure adequate response times:
- Query latency metrics (see the sketch after this list)
- GPU utilization (if applicable)
- Memory consumption
- Queue depth during peak usage
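A simple way to capture per-query latency is to wrap the inference call with a wall-clock timer; Ollama also reports its own timings (total_duration, in nanoseconds) and token counts in the response body. The sketch below assumes the default local endpoint.

```python
# Sketch of per-query latency capture around an Ollama generate call.
import time
import requests

def timed_query(model: str, prompt: str) -> dict:
    start = time.monotonic()
    r = requests.post("http://localhost:11434/api/generate", json={
        "model": model, "prompt": prompt, "stream": False,
    }, timeout=600)
    r.raise_for_status()
    body = r.json()
    return {
        "wall_seconds": round(time.monotonic() - start, 2),
        "model_seconds": round(body.get("total_duration", 0) / 1e9, 2),
        "tokens_generated": body.get("eval_count", 0),
    }

print(timed_query("llama3", "List the top 5 noisiest hosts."))
```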
Capacity Planning
Plan for growth in both log volume and AI query frequency:
- Log retention requirements
- Concurrent user estimates
- Query complexity trends
- Hardware refresh cycles
Model Selection for Air-Gapped Environments
Choosing the right AI model for air-gapped deployment involves balancing capability, performance, and hardware constraints.
Model Comparison for Security Analysis
| Model | Security Analysis Quality | Response Time | Memory Usage |
|---|---|---|---|
| Llama 3 8B | Good | 2-5 seconds | 8 GB |
| Llama 3 70B | Excellent | 15-30 seconds | 40 GB |
| Mistral 7B | Good | 2-4 seconds | 7 GB |
| Mixtral 8x7B | Very Good | 8-15 seconds | 26 GB |
Recommendations by Use Case
Real-time alerting and triage: Llama 3 8B or Mistral 7B provide fast responses suitable for interactive analysis. Analysts receive answers in seconds rather than waiting for larger models.
Comprehensive incident reports: Llama 3 70B or Mixtral 8x7B produce more detailed analysis with better reasoning. Use these for scheduled reports or complex investigations where response time is less critical.
Resource-constrained environments: Mistral 7B offers the best capability-to-resource ratio. Tactical deployments with limited hardware benefit from this model's efficiency.
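These recommendations can be expressed as a simple routing policy: interactive triage goes to a small fast model, scheduled reporting to a larger one. The model tags below are assumptions; adjust them to whatever is installed.

```python
# Illustrative routing policy implementing the recommendations above.
from enum import Enum

class Workload(Enum):
    INTERACTIVE = "interactive"   # alert triage, ad hoc questions
    REPORT = "report"             # scheduled or deep investigations

ROUTING = {
    Workload.INTERACTIVE: "mistral:7b",   # seconds-scale responses
    Workload.REPORT: "llama3:70b",        # slower, better reasoning
}

def model_for(workload: Workload, constrained_hardware: bool = False) -> str:
    if constrained_hardware:
        return "mistral:7b"  # best capability-to-resource ratio
    return ROUTING[workload]

print(model_for(Workload.INTERACTIVE))   # mistral:7b
print(model_for(Workload.REPORT))        # llama3:70b
```

The same policy could fall back to the small model when the larger one's queue is saturated during peak usage.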
GPU Acceleration
GPU acceleration dramatically improves inference performance:
| Configuration | Llama 3 8B | Llama 3 70B |
|---|---|---|
| CPU only (32 cores) | 15-20 seconds | 2-3 minutes |
| NVIDIA RTX 4090 | 2-3 seconds | 20-30 seconds |
| NVIDIA A100 | 1-2 seconds | 10-15 seconds |
| 2x NVIDIA A100 | <1 second | 5-8 seconds |
For environments requiring fast interactive analysis, GPU acceleration is strongly recommended. CPU-only inference works but limits query throughput.
Security Considerations for On-Premises AI
Data Isolation
On-premises AI ensures complete data isolation:
- Log data never leaves the network boundary
- AI queries process entirely locally
- No external API calls or telemetry
- No cloud provider access to sensitive data
This isolation satisfies requirements that prohibit cloud AI services for classified or sensitive data.
Model Integrity
Verify AI model integrity before deployment:
- Obtain models from trusted sources (Meta, Mistral AI official releases)
- Verify cryptographic signatures where available
- Test model behavior against known prompts (example after this list)
- Document model provenance for audit purposes
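A hedged sketch of the known-prompt check: each probe pairs a fixed prompt with a keyword the answer should contain, giving a crude pass/fail signal before a model enters production. The probes here are illustrative; sites would maintain their own approved set.

```python
# Known-prompt behavior check against a freshly loaded model.
import requests

PROBES = [
    ("What protocol uses TCP port 22?", "ssh"),
    ("Expand the acronym SIEM.", "security"),
]

def check_model(model: str = "llama3") -> bool:
    ok = True
    for prompt, expected in PROBES:
        r = requests.post("http://localhost:11434/api/generate", json={
            "model": model, "prompt": prompt, "stream": False,
        }, timeout=300)
        answer = r.json().get("response", "").lower()
        passed = expected in answer
        ok &= passed
        print(f"{'PASS' if passed else 'FAIL'}: {prompt}")
    return ok

check_model()
```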
Query Logging
Log all AI queries for audit and compliance:
- User identity for each query
- Query content and timestamp
- Response summary
- Resource consumption
Query logs provide accountability and support incident investigation if AI outputs require review.
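A minimal audit record covering those fields might be written as JSON lines. The field names and log path below are assumptions for illustration; real deployments would route records through the site's approved audit pipeline.

```python
# Sketch of an AI query audit record, appended as one JSON line per query.
import json
import time
from pathlib import Path

AUDIT_LOG = Path("/var/log/ai_query_audit.jsonl")  # assumed path

def audit_query(user: str, query: str, response: str,
                duration_s: float) -> None:
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "query": query,
        "response_summary": response[:200],   # truncated summary, not full text
        "duration_seconds": round(duration_s, 2),
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

audit_query("analyst1", "Show failed logins for host fw-01",
            "12 failed logins from 3 source IPs...", 3.4)
```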
Access Control
Implement role-based access to AI capabilities:
- Restrict AI query access to authorized users
- Separate read-only and administrative roles
- Integrate with existing identity management
- Enforce multi-factor authentication where required
Comparing Cloud vs. On-Premises AI
| Factor | Cloud AI | On-Premises AI |
|---|---|---|
| Data sovereignty | Data leaves network | Data stays local |
| Latency | Network dependent | Local only |
| Availability | Internet required | Fully autonomous |
| Model updates | Automatic | Manual transfer |
| Cost model | Per-query pricing | Fixed infrastructure |
| Compliance | Complex authorization | Simplified |
For organizations with data sovereignty requirements, on-premises AI is the only viable option. The trade-off is manual model management and infrastructure investment.
Micro-FAQ
Can AI log analysis work without internet connectivity?
Yes. LogZilla uses Ollama to run AI models locally. All processing occurs on-premises with zero external network dependencies. The system functions identically in air-gapped environments.
What AI models work in air-gapped deployments?
LogZilla supports Llama 2, Llama 3, Mistral, and Mixtral models through Ollama. Models are transferred once during initial setup and run entirely locally thereafter.
Does air-gapped AI have reduced capability?
No. On-premises AI provides the same functionality as cloud-connected deployments, including natural language queries, threat analysis, MITRE mapping, and remediation commands.
What compliance frameworks support air-gapped AI?
Air-gapped LogZilla deployments align with CMMC, FedRAMP, NIST 800-53, and IL4/IL5 requirements. No data leaves the classified network boundary.
Next Steps
Organizations requiring AI-powered log analysis in air-gapped environments can deploy LogZilla with on-premises AI capability. The architecture provides identical functionality to cloud-connected deployments while maintaining complete data sovereignty. Contact LogZilla for a technical briefing on air-gapped deployment options.