Flav wrote on Mar 12th, 2013 at 4:20pm:
Fucking hardware cost...
I have what I think is an accurate idea of the infrastructure involved in the Live server....
Let's just say that:
- 1 Oracle Cluster ( 2 powerful nodes at least )
- 1 EMC Symmetrix ( to hold the several terabytes of data )
- 1 SAN infrastructure ( to connect the Symmetrix to the Oracle Cluster and the Game Cluster )
- 1 Load Balancer for the Oracle Cluster
- 1 dedicated back-end network infrastructure ( Gigabit net, probably fiber )
- 1 Game Cluster ( around 100/200 blades, probably more )
- 1 Billing/accounting/authentication cluster ( several nodes... 5/6, as it's common infrastructure )
- 1 Load Balancer infrastructure for the game servers
- 1 Front-End Network ( common to all the servers )
Should be IMHO representative of what we call one single game server. ( say G-Land )
The number of blades in the Game Cluster can be dynamically reallocated to another server if needed... Possibly the Symmetrix could be shared if they went for a big one, but that adds complexity at the SAN and back-end network level.
Not many companies can afford to spend several hundred thousand dollars ( or euros ) on a test system.
And the few that can ( and do ) are usually way bigger than Turbine. ( let's say: in Telco: ATT, Verizon, BT, FT, DT, Telia, Vodafone, ... Ericsson, Alcatel-Lucent, Nokia-Siemens, Huawei, ZTE, ... HP, IBM, Oracle, SAP, ... )
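To give a feel for where that "several hundred thousand" figure comes from, here is a rough back-of-envelope sketch of the hardware list above. Every quantity and unit price below is an illustrative guess on my part, not an actual quote or anything from Turbine:

```python
# Back-of-envelope cost estimate for duplicating one "game server"
# as a test system. All unit prices are assumed, illustrative numbers.
components = {
    # name: (quantity, assumed unit price in USD)
    "Oracle cluster node":            (2,   50_000),
    "EMC Symmetrix array":            (1,  250_000),
    "SAN switches + HBAs":            (1,   40_000),
    "Load balancer (DB + game)":      (2,   20_000),
    "Game cluster blade":             (100,  5_000),
    "Billing/auth cluster node":      (6,   10_000),
    "Back-end + front-end network":   (1,   50_000),
}

total = sum(qty * unit for qty, unit in components.values())
print(f"Estimated hardware cost: ${total:,}")
# → Estimated hardware cost: $1,040,000
```

Even if you scale the test system down ( far fewer blades, a smaller array ), you don't get much below six figures with numbers like these, which is the point.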
Do you really need all of that stuff? Seriously.
Not trying to be an arsehole, it's just that my business background is from the school of "of course you have something to test on before going live". And there's still hilarity to tell. Am I missing something about the software industry here? (Obviously capriciously losing your digital stuff in a videogame is a degree smaller than a FS company going tits up, but don't you build a test system in as a basic?)
(I think I need to accept I don't know this world and shut up. But I'm still baffled.) If nothing else, wouldn't you want a mirror system to blow up the exploits on? Help me out here....