Security & Trust

Security by default, built into every project, not bolted on later

The controls below are how we operate. They apply to systems we build for you, infrastructure we operate on your behalf, and the internal tools our team uses every day.

AES-256 at rest · TLS 1.3 in transit · RBAC + MFA · SAST / DAST · 13-month audit logs · 90-day disclosure

Data classification

Every dataset we touch on a project is tagged with one of four classifications. The classification drives the controls applied: storage location, encryption keys, access scope, retention, and logging.

Class | Examples | Default controls
Public | Marketing copy, public API docs, press releases | No restrictions; integrity controls only.
Internal | Architecture diagrams, internal runbooks, team handbooks | RBAC, signed-in access, encrypted at rest.
Confidential | Client source code, business data, customer records | RBAC, MFA, AES-256 at rest, TLS 1.3, full audit log.
Restricted | PII, PHI, payment data, secrets, credentials | Per-project key, just-in-time access, hardware MFA, immutable log, no LLM exposure.
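As a sketch, the matrix above can be expressed as a lookup that provisioning tooling consults before creating storage or granting access. The names below are illustrative, not our actual tooling:

```python
# Hypothetical lookup table mirroring the classification matrix above.
CONTROLS = {
    "public":       {"integrity checks"},
    "internal":     {"RBAC", "signed-in access", "encryption at rest"},
    "confidential": {"RBAC", "MFA", "AES-256 at rest", "TLS 1.3",
                     "full audit log"},
    "restricted":   {"per-project key", "just-in-time access", "hardware MFA",
                     "immutable log", "no LLM exposure"},
}

def controls_for(classification: str) -> set:
    """Return the default control set for a dataset's classification."""
    return CONTROLS[classification.lower()]
```

Driving controls from a single table like this keeps the policy auditable: there is one place to check what "restricted" actually means.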

Encryption and key management

At rest: AES-256 encryption is mandatory for all production data. Database volumes (RDS, Cloud SQL, managed Postgres), object storage (S3, GCS, Azure Blob), block storage (EBS, persistent disks), and backups are encrypted with provider-managed keys at minimum, customer-managed keys (CMK) on request.

In transit: TLS 1.3 is the default. We disable TLS 1.0 and 1.1 on every endpoint we deploy. HSTS, secure cookies, and modern cipher suites are configured by default. Internal service-to-service traffic uses mTLS where possible.
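In Python, for instance, a client that refuses anything below TLS 1.3 is a one-liner on top of the standard library. This is a sketch of the policy, not our deployment configuration:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses TLS 1.2 and below."""
    ctx = ssl.create_default_context()            # certificate verification on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # TLS 1.0/1.1/1.2 rejected
    return ctx
```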

Key management: AWS KMS, GCP KMS, or Azure Key Vault depending on your cloud. Keys are rotated annually by default, or on demand at your request. Keys are separated per project and never shared across tenants.

Secrets: Application secrets live in HashiCorp Vault or AWS Secrets Manager. They are never committed to source control, never written to plain log files, never shared over email or chat. Pre-commit hooks and CI scanners (TruffleHog, gitleaks) block accidental commits.
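A minimal pre-commit check works in the same spirit as TruffleHog and gitleaks: pattern-match staged content and fail the hook on any hit. The two patterns below are illustrative only; real rule sets are far larger:

```python
import re

# Hypothetical minimal secret scanner. Patterns modelled on the kinds of
# rules TruffleHog and gitleaks ship; not an exhaustive rule set.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(text: str) -> list:
    """Return the names of every secret pattern found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]
```

A pre-commit hook would run this over the staged diff and exit non-zero if the returned list is non-empty.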

Access control and identity

Role-based access control (RBAC). Every system we build ships with RBAC by default. Permissions are role-based, not user-based, and roles are documented in the architecture document so you always know what each role can do.

Single sign-on (SSO). SAML 2.0 and OIDC are supported on every web application we build. We integrate with Okta, Microsoft Entra ID, Google Workspace, Auth0, and most other identity providers.

Multi-factor authentication (MFA). MFA is mandatory for any account with production access, both inside our team and for client admin accounts we provision. Hardware keys (YubiKey) are preferred over TOTP for restricted-class systems.
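TOTP, the fallback when hardware keys are not in play, is small enough to sketch in full. This follows RFC 6238: HMAC-SHA1 over a 30-second time-step counter, with dynamic truncation:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The weakness relative to hardware keys is visible in the code: anyone holding the shared secret can mint valid codes, which is why restricted-class systems get YubiKeys instead.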

Just-in-time elevated access. Nobody on our team has permanent production admin. Production access is granted on request through a logged approval workflow, expires automatically (typically 4 hours), and is fully audited. Standing admin permissions are an audit finding, not a feature.
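The core of just-in-time access is that a grant carries its own expiry, so revocation happens by default rather than by remembering to clean up. A hypothetical sketch, using the 4-hour default mentioned above:

```python
from datetime import datetime, timedelta, timezone

class Grant:
    """Hypothetical just-in-time grant: approved access that expires on
    its own rather than persisting as a standing permission."""

    def __init__(self, user: str, role: str, ttl_hours: float = 4):
        self.user, self.role = user, role
        self.granted_at = datetime.now(timezone.utc)
        self.expires_at = self.granted_at + timedelta(hours=ttl_hours)

    def is_active(self, now=None) -> bool:
        """Active only inside the approved window; no manual revocation needed."""
        return (now or datetime.now(timezone.utc)) < self.expires_at
```

In a real deployment the grant and its approval would also be written to the audit log, and enforcement would sit in the cloud IAM layer, not application code.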

Offboarding. When an engineer rolls off a project, their project access is revoked the same business day. SSO group membership, repository access, cloud IAM, and VPN are all keyed to the SSO identity, so deprovisioning a single identity removes everything.

Audit logging

Every authentication event, privileged action, data export, configuration change, and admin action is logged. Logs are written to an append-only, immutable store (for example CloudWatch Logs archived to S3 with Object Lock, or your SIEM of choice) and are tamper-evident by design.

Default retention is 13 months, sized for annual audits with overlap. Longer retention is configurable per project. Logs can be shipped in real time to your SIEM (Splunk, Datadog, Elastic, Sumo Logic) on request, including via Kinesis Data Firehose, Pub/Sub, or syslog.

Log access is itself logged. If anyone reads or queries an audit log, that read is recorded. This sounds obvious but is often missed.
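One common way to make a log tamper-evident is a hash chain: each entry's hash covers the previous entry's hash, so editing any record in place breaks verification from that point on. A minimal sketch of the idea:

```python
import hashlib
import json

class AuditLog:
    """Hash-chained append-only log. Each entry's hash covers the previous
    hash, so any in-place edit is detectable on verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []  # list of (record_json, chained_hash)

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        h = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append((payload, h))
        return h

    def verify(self) -> bool:
        """Recompute the chain; any mismatch means the log was altered."""
        prev = self.GENESIS
        for payload, h in self.entries:
            if hashlib.sha256((prev + payload).encode()).hexdigest() != h:
                return False
            prev = h
        return True
```

Managed services implement the same property differently (write-once storage, external anchoring), but the guarantee is the one shown here: you cannot silently rewrite history.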

Sub-processor list

The vendors below are the third parties that may process client-related data on our behalf, either as part of a delivery engagement or for our own internal operations. We sign a Data Processing Agreement (DPA) with every sub-processor that handles confidential or restricted data. We notify clients of material changes to this list.

Sub-processor | Purpose | Region | DPA signed
Amazon Web Services | Compute, hosting, managed databases, storage | Client choice (UK, EU, US, AP) | Yes
Cloudflare | CDN, WAF, DDoS protection, edge DNS | Global edge | Yes
GitHub | Source code hosting, CI/CD, code review | US | Yes
Sentry | Error tracking, application performance monitoring | US or EU (per project) | Yes
Linear | Project management and issue tracking | US | Yes
Notion | Internal documentation, runbooks, knowledge base | US | Yes
Slack | Internal and client communication | US | Yes
Google Workspace | Email, calendar, internal documents | US / EU | Yes
Calendly | Consultation scheduling | US | Yes
Formspree | Marketing form submissions (this site) | US | Yes

We do not place client production data into Notion, Linear, or Slack. Communication tools carry project metadata and conversations, never customer records.

Penetration testing and code scanning

Annual third-party penetration test. Our own corporate environment and a representative client production environment are tested annually by an independent CREST-accredited firm or OSCP-certified testers. A summary attestation letter is available on request under NDA. Per-project pen tests are quoted as an add-on for client systems.

Static analysis (SAST). Every pull request is scanned with Semgrep using OWASP Top 10 and CWE Top 25 rule sets. Findings of high severity or above block the merge until resolved or explicitly accepted.
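The merge gate itself is simple policy logic: rank severities, and block if any unaccepted finding is at or above the threshold. A sketch of that rule (field names here are illustrative, not Semgrep's output schema):

```python
# Hypothetical severity gate for SAST findings.
SEVERITY_RANK = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def merge_blocked(findings, threshold: str = "high") -> bool:
    """Block the merge if any finding at or above the threshold has not
    been explicitly accepted."""
    gate = SEVERITY_RANK[threshold]
    return any(
        SEVERITY_RANK[f["severity"]] >= gate and not f.get("accepted")
        for f in findings
    )
```

The explicit `accepted` flag matters: a waived finding is a recorded decision with an owner, not a silently ignored alert.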

Dynamic analysis (DAST). OWASP ZAP runs against every staging deploy on a scheduled basis. Headers, common injection points, and authentication flows are probed. New high findings open a P1 ticket automatically.

Software composition analysis (SCA). Snyk and Trivy scan dependencies, container images, and infrastructure-as-code. Critical CVEs trigger an immediate patch sprint. Dependabot and Renovate keep base versions current.

Secret scanning. TruffleHog and gitleaks run on every push and on git history weekly. Any leaked credential is treated as a P1 incident.

Vulnerability disclosure

If you believe you have found a security issue in any RG INSYS-operated system or in software we maintain on behalf of a client, please email security@rginsys.com. PGP key available on request. We acknowledge reports within 1 business day, triage within 5, and operate a 90-day responsible disclosure window before any public discussion.

We do not pursue legal action against good-faith researchers who follow responsible disclosure norms. We credit reporters in our security advisories on request.

Business continuity and disaster recovery

For production systems we operate on your behalf, the default targets are:

  • RTO (Recovery Time Objective): 4 hours.
  • RPO (Recovery Point Objective): 1 hour.
  • Backups: automated, encrypted, daily full plus continuous WAL/PITR for managed Postgres. Stored in a separate AWS account or KMS scope.
  • Multi-AZ: default for production tier. The application can lose an entire availability zone without going down.
  • Cross-region replication: available on request, typically for restricted-class workloads or regulated industries.
  • DR drill: we run a recovery exercise twice a year per managed project. Findings feed back into runbooks.
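The RPO target above has a concrete operational meaning: the data lost in a failure is the gap between the failure and the most recent usable recovery point, so that gap must never exceed one hour. A sketch of the check a DR drill would run:

```python
from datetime import datetime, timedelta

def data_loss_window(failure_time, recovery_points):
    """Worst-case data loss = time since the most recent recovery point
    taken at or before the failure. This must stay within the RPO."""
    usable = [t for t in recovery_points if t <= failure_time]
    if not usable:
        raise ValueError("no recovery point before failure")
    return failure_time - max(usable)
```

With continuous WAL/PITR the recovery points are effectively seconds apart, which is how a one-hour RPO is met comfortably rather than marginally.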

Where the client operates production directly, we hand over runbooks, IaC, and a documented DR plan during stabilisation. We can be on-call to support a real DR event under a separate retainer.

Incident response

We follow a documented incident response runbook based on NIST SP 800-61. Phases: prepare, detect, contain, eradicate, recover, post-mortem.

Notification. We notify the affected client within 24 hours of confirming any incident that touches their data. The notification includes what we know, what we do not yet know, the systems involved, and the immediate containment steps taken.

Post-mortem. A written post-mortem is delivered within 5 business days of containment. It is blameless, root-cause focused, and includes corrective actions with owners and dates. Clients are welcome to attend the post-mortem call.

Tabletop exercises. The internal team runs a tabletop exercise every quarter. Scenarios include credential leak, ransomware on a developer laptop, a cloud account compromise, and a sub-processor outage.

Employee security

Background checks. Every full-time team member completes an identity, education, and prior-employment check before they are issued a corporate identity or any client access.

Endpoint security. Every laptop is corporate-issued, full-disk encrypted (FileVault on macOS, BitLocker on Windows), and managed via Jamf or Microsoft Intune. EDR (CrowdStrike or Defender for Endpoint) runs on every device. USB mass storage is blocked by policy. Patches are pushed automatically.

Separation of devices. Personal devices are not used for client work. Personal email and personal cloud storage are blocked from client repositories at the SSO and DLP layer.

Security training. Every team member completes onboarding security training and a quarterly refresher. Phishing simulations run on a rolling basis. Engineers complete additional secure-coding modules covering OWASP Top 10 and AI-assisted coding pitfalls.

Acceptable use. Documented acceptable-use policy, signed at onboarding and re-attested annually. Covers shadow IT, password hygiene, AI tool usage, and incident reporting duties.

AI and LLM data handling

We are an AI-native team and we use LLM coding assistants every day. We treat the data that flows through those assistants with more care than any other class of data, because once it leaves your environment it is hard to claw back. Our standing rules:

  • No client code is shared with third-party model providers without explicit written consent. Default LLM access is configured against zero-data-retention enterprise endpoints (Anthropic Claude API ZDR, OpenAI Enterprise no-training tier, GitHub Copilot Business with code-reference filtering).
  • Prompt redaction. PII, secrets, and known restricted-class fields are stripped from prompts before they leave the developer machine. We use a maintained allow-list of fields per project.
  • Auditability. Every LLM call made against client repositories is logged with prompt hash, model, timestamp, and user. The log is reviewable by the client on request.
  • Local LLM option. For restricted-class workloads (PHI, payment data, defense, intelligence) we offer an on-premise or in-VPC deployment of open-weight models (Llama, Mistral, Qwen) so no prompt leaves the client environment.
  • Per-project keys. Each project has its own API keys and its own usage budget. Keys are rotated quarterly and immediately on any suspected compromise.
  • No training on client data. Provider terms are negotiated to ensure client prompts are not used to train, fine-tune, or improve provider models. We have signed addenda from our primary providers confirming this.
  • Human review. Every line of AI-generated code is reviewed by a senior engineer before merge. AI does not push to production directly.
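As an illustration of the prompt-redaction rule, a minimal redactor substitutes placeholder tokens for matched patterns before a prompt leaves the machine. The two patterns below are examples only; a real deployment would be driven by the per-project field list mentioned above, and the `sk-`/`pk-` key format is a hypothetical stand-in:

```python
import re

# Hypothetical deny-patterns; real projects derive these from the
# maintained per-project field list.
DENY_PATTERNS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched sensitive value with a labelled placeholder."""
    for name, pat in DENY_PATTERNS.items():
        prompt = pat.sub(f"[REDACTED:{name}]", prompt)
    return prompt
```

The labelled placeholders keep the prompt useful to the model ("there was an email here") while ensuring the value itself never leaves the developer machine.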

Want to dig deeper?

Talk to a senior engineer. We will walk through any control in detail and answer security questionnaire items live on the call.
