

DataBank is launching a BMaaS product that will be deployed as a low-cost, POD-based structure at site-specific DCs where we have space, power, and customer demand. 

  • These PODs will be remotely managed and self-contained, allowing DataBank to deploy them quickly in any of our Data Centers with minimal customer commitment. 
  • For a single-rack deployment, DataBank will allocate two switches per rack to ensure HA, but will determine whether it's more cost-effective to cross-cable racks/switches once a second rack is added, to reduce per-rack overhead. 
  • DataBank will deploy these racks in the least coveted locations (hot aisles and older, out-of-the-way spaces) within our facilities to efficiently utilize space. 
  • Racks will NOT need to be next to each other, but ideally they will be deployed in pairs. 
    • These racks will be enclosed cabinets to conceal the mixture of legacy hardware in use. 
  • All servers will have redundant power supplies and all racks will have redundant power feeds. 
  • DataBank will utilize these locations as "hardware graveyards" to provide an alternative to hardware recycling when a customer decommissions a system. 
    • Hard drives WILL be reused on this platform (from a compliance standpoint, this cannot be a use-once-and-destroy platform). 
    • DataBank will track hard drive utilization and replace them based upon usage. 
    • (Existing graveyard systems will ALL have their hard drives replaced since we do not know their age or health). 
    • This effort will not only provide a cost-effective compute solution to our existing colocation customers but also a GREEN one that will help with our environmental initiatives. 
    • All hardware will be treated as "cattle" not pets. 
  • When a piece of hardware fails, the customer will be provided like-or-better hardware within the platform to restore their workload to.
    • This assumes the customer has purchased backups or has their own backup system in place.
    • DataBank will need to determine how systems will be backed up cost-effectively, so that these PODs remain economical and can expand to more locations. 
  • Locations without an existing or customer-sponsored SAN will only offer local disk storage. 
    • DataBank WILL offer storage upgrades on hot-swappable storage PRIOR to server deployment via ORDER (not through the platform).
    • DataBank will NOT offer storage upgrades on live servers, because we do not have the skillset in all regions to perform this work. 
    • DataBank will NOT offer CPU or RAM upgrades on existing servers. 
  • Custom configurations not available via inventory can be custom-ordered through Sales, due to the massive inventory requirements to stock all permutations of hardware AND the lack of skillset in all regions to perform this work. 
  • BareMetal will ONLY offer 10Gb networking at launch but can scale to 25Gb where it is available at the location. 
    • Sizeable deals (over $50K MRR) will be considered for higher network requirements. 

DataBank Common Standard Builds


SM - R650 (small offering)

LG - R750 (large offering)

CUSTOM - R750 with denser memory, more drives, etc.


SM - Dual Intel Xeon Silver 4309Y 2.8G, 8C/16T, 10.4GT/s, 12M Cache, Turbo, HT (105W) DDR4-2666

LG - Dual Intel Xeon Gold 6338 2.9G, 32C/64T, 11.2GT/s, 48M Cache, Turbo, HT (205W) DDR4-3200


SM - 8GB RDIMM, 3200MT/s, Single Rank

LG - 32GB RDIMM, 3200MT/s, Dual Rank 16Gb BASE x8


SM/LG - 2x 480GB M.2 SSDs (RAID 1) on BOSS controller


SM - 2x 8TB 7.2K RPM SATA 6Gbps 512e 3.5in Hot-plug Hard Drives

LG - 4x 8TB 7.2K RPM SAS ISE 12Gbps 512e 3.5in Hard Drive

LG - 4x 960GB SSD / 12x 8TB HDD (SAS) onboard boot option


Customers should be able to order SOME additional managed services on BareMetal at launch:

  • Customer Notified Monitoring (Phase 2?)
  • Any other Managed Services

Monitored by DataBank

  • BMaaS POD Infrastructure Switches

  • BMaaS POD Infrastructure Firewalls

  • BMaaS POD Out of Band Management Devices

  • BareMetal Server Hardware Alerts

  • BareMetal Server Hard Drive Failures

BareMetal is planned to be deployed in the following locations:

  • IAD1 will have 2 cabinets to start. IAD1 will be deployed with 52 new, state-of-the-art Dell servers (current server technology) for Sales to sell, customers to beta, etc. 
    • 16 Large Servers
    • 32 Small Servers

Persistent Storage

  • While not a requirement for launch, we have heard from many potential customers that persistent storage is critical to their business. Rather than asking them to solve for this with multiple additional bare metal systems and JBODs, we're going to investigate lower-cost, in-rack storage options for this purpose. (Ref. Dell)
  • Object Storage (OBS) can be added on.


  • PODs can launch without SPINE switching to save money, but after 2 racks we need to implement spine switching, which will add approximately $X to the solution; we need to make sure the numbers still work once we scale beyond 2 racks.
  • The scaling plan is 256 PODs, at 8 racks per POD and 38 servers per rack.
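As a quick sanity check, the scaling plan above implies the following full build-out capacity (illustrative arithmetic only, not a commitment):

```python
# Full build-out implied by the scaling plan: 256 PODs, 8 racks per POD,
# 38 servers per rack.
PODS = 256
RACKS_PER_POD = 8
SERVERS_PER_RACK = 38

total_racks = PODS * RACKS_PER_POD              # 2,048 racks
total_servers = total_racks * SERVERS_PER_RACK  # 77,824 servers

print(f"{total_racks} racks, {total_servers} servers at full build-out")
```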

Other possible deployment locations: 

  • DFW3 (Popularity/FedRAMP)
  • MSP2 (FedRAMP)
  • BWI1 - R430, R620, R720, and R730 servers may be available for use. 

Primary Provisioning & Support Processes: 

 Client Provisioning  

  1. Create User Portal Account(s) as required. 
  2. Verify Account Creation and Functionality
  3. Create Client Account inside MetalSoft Controller
  4. Flag Client Account in MetalSoft software as ‘Billable’ 
  5. Configure Duo Two Factor Authentication for Client Portal Access for new accounts.
  6. Create Client Infrastructure
  7. Identify Machines that match client order.
  8. Assign machines to Client Infrastructure
  9. Document Process and Verify Documented Process Steps
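The provisioning steps above can be sketched as an ordered checklist. The step names below are shorthand invented for illustration, not MetalSoft API calls:

```python
# Hypothetical ordered checklist for client provisioning; step names are
# shorthand for the documented steps, not real tooling identifiers.
PROVISIONING_STEPS = [
    "create_portal_accounts",
    "verify_account_creation",
    "create_metalsoft_client_account",
    "flag_account_billable",
    "configure_duo_2fa",
    "create_client_infrastructure",
    "identify_matching_machines",
    "assign_machines_to_infrastructure",
    "document_and_verify_process",
]

def next_step(completed):
    """Return the first step not yet completed, enforcing the documented order."""
    for step in PROVISIONING_STEPS:
        if step not in completed:
            return step
    return None  # provisioning finished

print(next_step({"create_portal_accounts"}))  # -> verify_account_creation
```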

 Response to Client Inbound Support Call: 

  1. Gather client contact information.
  2. Obtain Server Location and Serial/Service Tag
  3. Determine if Client Request is In Scope:
    • Networking/Connectivity Issues 
    • Hardware Failure
    • Deployment Failures Caused by MetalSoft Orchestration
  4. For Networking/Connectivity Issues, escalate/reassign the ticket to the network support team.
  5. For Hardware Failures, complete the 'Hardware Failure' process (below). 
  6. For Deployment Failures caused by MetalSoft Orchestration, escalate to MetalSoft support (get the phone number and support link; provide a document for the client email, e.g. "Welcome to DataBank Metal").
  7. Document Process and Verify Documented Process Steps
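A minimal sketch of the triage logic above; the category keys and routing strings are illustrative, not ticket-system configuration:

```python
# Route an inbound support request per the process above; only three
# categories are in scope for the BareMetal support team.
ROUTING = {
    "networking": "escalate/reassign ticket to network support team",
    "hardware": "run the Hardware Failure process",
    "orchestration": "escalate to MetalSoft support",
}

def triage(category):
    """Return the routing action for an in-scope category, else flag out of scope."""
    return ROUTING.get(category, "out of scope")

print(triage("hardware"))  # -> run the Hardware Failure process
```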

Response to Client Hardware Failure 

  1. Identify Machine Based on Client-Provided Information:
    • MetalSoft Server ID Number
    • S/N (the Dell Service Tag)
  2. Manually change the machine Status to 'Defective'
  3. Identify an available Server with same or better specifications.
  4. Assign the replacement Server to the client Infrastructure.
  5. Contact Dell Support to initiate a service call.
  6. Open an internal ticket for DC Ops to admit the Vendor Tech.
  7. Document Process and Verify Documented Process Steps
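The "identify an available server with same or better specifications" step could look like the sketch below; the spec fields and server IDs are illustrative (the real inventory lives in MetalSoft):

```python
# Sketch of replacement selection: pick an available server whose specs all
# meet or exceed the failed server's. Fields here are assumptions.
def pick_replacement(failed, pool):
    """Return the ID of the first available server with same-or-better
    cores, RAM, and disk; None if no suitable spare exists."""
    for s in pool:
        if (s["available"]
                and s["cores"] >= failed["cores"]
                and s["ram_gb"] >= failed["ram_gb"]
                and s["disk_tb"] >= failed["disk_tb"]):
            return s["id"]
    return None

failed = {"cores": 16, "ram_gb": 128, "disk_tb": 16}
pool = [
    {"id": "SM-07", "available": True, "cores": 16, "ram_gb": 64, "disk_tb": 16},
    {"id": "LG-03", "available": True, "cores": 64, "ram_gb": 512, "disk_tb": 32},
]
print(pick_replacement(failed, pool))  # -> LG-03
```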

Reclaim and Wipe Machine After End of Client Contract or after Vendor Repair of Hardware Failure  

  1. Validate that Client Contract is Terminated
  2. Ticket should be Associated with Termination Of Service (TOS) Order
  3. Navigate to Server Status Page
  4. Decommission and Clean Server 
  5. Verify that cleaning process completes successfully and Cleaned Server Shows in Available Status
  6. Document Process and Verify Documented Process Steps

Secondary Maintenance Processes: 

Configuration Documentation 

  1. Update Network and Dataflow Diagrams
  2. Validate and publish SHI-produced Rack Elevations

 Security and Compliance  

  1. Complete Vulnerability Assessment of Production Environment
  2. Review Security Controls placed on POC Environment
  3. Define Permissible Connection Types to Other DataBank Services (if any)
  4. Update network flow and controls diagram package
  5. Implement Security Controls on Production Environment
  6. Rerun Vulnerability Assessment to Confirm Control Efficacy

 Deploy New Pod Location 

  1. Create Internal Order for New MetalSoft Agent (proxy) 
  2. Escalate to MetalSoft to deploy Site-specific Agent
  3. Coordinate with DC Operations to get new pod moved from transit/crates to assigned position on floor. 
  4. Open ticket to arrange for PDU Connection to A and B power.
  5. Open Ticket to arrange for cross connection to Bare Metal Managed Internet
  6. Verify connectivity of the MetalSoft Master Console to Managed Site Equipment:
    • Management-plane Firewalls
    • Production switches
    • Managed Servers
  7. Verify proper operation of Zero Touch Provisioning and that pre-racked servers are visible in the MetalSoft Master Console.
  8. Document Process and Verify Documented Process Steps

 Metal Server Maintenance Processes 

  1. Test/Document Patching/Resetting Server BIOS
  2. Test/Document Server Firmware Updates/Patches:
    • PERC
    • NICs
    • iDRAC
  3. Document Process and Verify Documented Process Steps

Switch Maintenance  

  1. Validate/Document Implementation of Spine Switches
  2. Validate Switch Patch Process
  3. Validate Switch Major Upgrade Process
  4. Confirm MetalSoft software operation survives upgrades.
  5. Document Process and Verify Documented Process Steps

SLA Agreements

Bare Metal SLA Agreements

For DataBank Metal Servers, DataBank guarantees the functioning of all server hardware components it provides and shall make available a replacement server with reasonably similar specifications to the failed server at no additional cost. On the DataBank Metal platform, DataBank will not provide replacement components, but will instead provide available "spare" servers to be utilized in place of a failed system or a system with failed components. The replacement will be initiated as soon as hardware is determined to be the cause of the problem, and the new hardware shall become available to the Customer within four (4) hours. DataBank may choose, based on environment and solution design, to utilize "hot spares" in the event of a failure to ensure that availability and performance of the system continue until appropriate replacements can be procured. 

A failed server is defined as a server that has experienced the loss of a component that has rendered it unusable, as determined by DataBank Operations. This determination will be made once the customer has opened a ticket with DataBank support and DataBank Operations has confirmed the server as "failed"; once this update has been placed in the ticket by DataBank Operations, the 4-hour replacement countdown will begin. 
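Since the 4-hour countdown starts only when Operations places the "failed" confirmation in the ticket, the replacement deadline can be sketched as:

```python
from datetime import datetime, timedelta

# The 4-hour replacement window opens when DataBank Operations confirms
# the server as "failed" in the ticket, not when the ticket is opened.
def replacement_deadline(confirmed_failed_at):
    """Deadline by which like-or-better hardware must be available."""
    return confirmed_failed_at + timedelta(hours=4)

print(replacement_deadline(datetime(2024, 1, 15, 9, 30)))  # 2024-01-15 13:30:00
```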

The following SLA applies to Dedicated Hardware:

Availability % 

Credit against Monthly Charges 










