Titan: Supercomputer

Titan is a supercomputer built by Cray at Oak Ridge National Laboratory for use in a variety of science projects. Titan is an upgrade of Jaguar, a previous supercomputer at Oak Ridge, that uses graphics processing units (GPUs) in addition to conventional central processing units (CPUs). It is the first such hybrid to perform over 10 petaFLOPS. The upgrade began in October 2011, commenced stability testing in October 2012 and it became available to researchers in early 2013. The initial cost of the upgrade was US$60 million, funded primarily by the United States Department of Energy.

Its computing speed is the equivalent of performing 17,590 trillion calculations per second (17.59 petaflop/s). The Tianhe-2 system that later overtook it scored almost double that, clocking in at 33.86 petaflop/s.

Titan employs AMD Opteron CPUs in conjunction with Nvidia Tesla GPUs to improve energy efficiency while providing an order of magnitude increase in computational power over Jaguar. It uses 18,688 CPUs paired with an equal number of GPUs to perform at a theoretical peak of 27 petaFLOPS; in the LINPACK benchmark used to rank supercomputers' speed, it performed at 17.59 petaFLOPS. This was enough to take first place in the November 2012 list by the TOP500 organization, but Tianhe-2 overtook it on the June 2013 list.
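The ratio between the LINPACK score and the theoretical peak, and the sustained throughput per node, follow directly from the figures quoted above. A back-of-the-envelope sketch (not an official breakdown):

```python
# Back-of-the-envelope check of the figures quoted above.
peak_pflops = 27.0       # theoretical peak
linpack_pflops = 17.59   # measured LINPACK score
nodes = 18688            # CPU+GPU node pairs

# Fraction of the theoretical peak reached on LINPACK.
efficiency = linpack_pflops / peak_pflops

# Sustained throughput per node; 1 petaFLOPS = 1e6 gigaFLOPS.
per_node_gflops = linpack_pflops * 1e6 / nodes

print(f"LINPACK efficiency: {efficiency:.0%}")           # ~65%
print(f"Sustained per node: {per_node_gflops:.0f} GFLOPS")
```

An efficiency around 65% of peak is typical for early hybrid CPU/GPU machines, where the benchmark cannot keep every accelerator saturated.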

Titan is available for any scientific purpose; access depends on the importance of the project and its potential to exploit the hybrid architecture. Any selected code must also be executable on other supercomputers to avoid sole dependence on Titan. Six vanguard codes were the first selected. They dealt mostly with molecular-scale physics or climate models, while 25 others queued behind them. The inclusion of GPUs compelled authors to alter their codes. The modifications typically increased the degree of parallelism, given that GPUs offer many more simultaneous threads than CPUs. The changes often yield greater performance even on CPU-only machines.
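The kind of restructuring described above can be illustrated with a minimal, hypothetical sketch (Python as a stand-in, not actual Titan code): work is re-expressed as independent per-element operations so that thousands of GPU threads can each take one element, instead of one long sequential loop.

```python
# Hypothetical sketch of the restructuring GPU ports require:
# express the work as independent per-element operations rather
# than one sequential loop.

def saxpy_loop(a, xs, ys):
    """Sequential form: iterations run one after another."""
    out = []
    for x, y in zip(xs, ys):
        out.append(a * x + y)
    return out

def saxpy_data_parallel(a, xs, ys):
    """Same result, but each element is computed independently --
    the form a GPU runtime can spread across thousands of threads,
    one thread per element."""
    return [a * x + y for x, y in zip(xs, ys)]
```

Because the data-parallel form has no cross-iteration dependence, it also vectorizes well on conventional CPUs, which matches the observation that the GPU-driven rewrites often helped on CPU-only machines too.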


Development

Plans to create a supercomputer capable of 20 petaFLOPS at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL) originated as far back as 2005, when Jaguar was built.[2] Titan will itself be replaced by an approximately 200 petaFLOPS system in 2016 as part of ORNL's plan to operate an exascale (1000 petaFLOPS to 1 exaFLOPS) machine by 2020.[2][3][4] The initial plan to build a new 15,000-square-meter (160,000 sq ft) building for Titan was discarded in favor of using Jaguar's existing infrastructure.[5] The precise system architecture was not finalized until 2010, although a deal with Nvidia to supply the GPUs was signed in 2009.[6] Titan was first announced at the private ACM/IEEE Supercomputing Conference (SC10) on November 16, 2010, and was publicly announced on October 11, 2011, as the first phase of the Titan upgrade began.[3][7]

Jaguar had received various upgrades since its creation. It began with the Cray XT3 platform that yielded 25 teraFLOPS.[8] By 2008, Jaguar had been expanded with more cabinets and upgraded to the XT4 platform, reaching 263 teraFLOPS.[8] In 2009, it was upgraded to the XT5 platform, hitting 1.4 petaFLOPS.[8] Its final upgrades brought Jaguar to 1.76 petaFLOPS.[9]

Titan was funded primarily by the US Department of Energy through ORNL. Funding was sufficient to purchase the CPUs but not all of the GPUs so the National Oceanic and Atmospheric Administration agreed to fund the remaining nodes in return for computing time.[10][11] ORNL scientific computing chief Jeff Nichols noted that Titan cost approximately $60 million upfront, of which the NOAA contribution was less than $10 million, but precise figures were covered by non-disclosure agreements.[10][12] The full term of the contract with Cray included $97 million, excluding potential upgrades.[12]

The yearlong conversion began October 9, 2011.[13][14] Between October and December, 96 of Jaguar's 200 cabinets, each containing 24 XT5 blades (two 6-core CPUs per node, four nodes per blade), were upgraded to XK7 blades (one 16-core CPU per node, four nodes per blade) while the remainder of the machine remained in use.[13] In December, computation was moved to the 96 XK7 cabinets while the remaining 104 cabinets were upgraded to XK7 blades.[13] The system interconnect (the network over which CPUs communicate with each other) was updated, and ORNL's external ESnet connection was upgraded from 10 Gbit/s to 100 Gbit/s.[13][15] The system memory was doubled to 584 TiB.[14] 960 of the XK7 nodes (10 cabinets) were fitted with a Fermi-based GPU, as Kepler GPUs were not then available; these 960 nodes were referred to as TitanDev and used to test code.[13][14] This first phase of the upgrade increased the peak performance of Jaguar to 3.3 petaFLOPS.[14] Beginning on September 13, 2012, Nvidia K20X GPUs were fitted to all of Jaguar's XK7 compute blades, including the 960 TitanDev nodes.[13][16][17] In October, the task was completed and the computer was finally christened Titan.[13]
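The cabinet, blade, and node counts above can be cross-checked with simple arithmetic. A sketch under two stated assumptions: that the slots beyond the 18,688 compute nodes serve non-compute (I/O and service) duties, and that the 584 TiB of memory is spread evenly across compute nodes; neither split is given in the text.

```python
# Cross-check of the cabinet/blade/node arithmetic described above.
cabinets = 200            # 96 upgraded first, then the remaining 104
blades_per_cabinet = 24
nodes_per_blade = 4

total_node_slots = cabinets * blades_per_cabinet * nodes_per_blade  # 19,200

compute_nodes = 18688     # CPU+GPU pairs quoted earlier; the remaining slots
                          # are assumed here to host I/O/service duties
memory_tib = 584          # total system memory after doubling

# Memory per compute node, assuming an even spread (1 TiB = 1024 GiB).
per_node_gib = memory_tib * 1024 / compute_nodes

print(total_node_slots)                # 19200
print(f"{per_node_gib:.0f} GiB/node")  # 32 GiB/node
```

The even-spread assumption lands on a round 32 GiB per compute node, which is at least consistent with the doubled 584 TiB total.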
