Garden 3 – Inria Data Center

INRIA is the only French public research center fully dedicated to computational sciences. Its researchers collaborate with public and private research units in France to integrate basic and applied research and to transfer innovative ideas to industry.

Within the ECO2Clouds resources, FR-Inria runs OpenNebula, in a version derived from OpenNebula 3.6 for BonFIRE.

Hypervisor used: Nodes run Xen 3.2

Image management: Inria’s setup is described in a blog entry on OpenNebula’s blog platform: http://blog.opennebula.org/?author=59

Basically, NFS is configured on the hypervisor of the service machine and mounted on the OpenNebula frontend and on the workers. The TM drivers are modified to use dd to copy VM images from the NFS mount to the local disk of the worker node (a local LV, to be precise), and cp to save VM images back to NFS. This way (see the sketch after this list), we:

have an efficient copy of images to workers (no SSH tunneling)

may see a significant improvement thanks to the NFS cache

don’t suffer from concurrent write access to NFS, because VMs are booted from a local copy
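The modified TM drivers themselves are shell scripts shipped with OpenNebula; the following Python sketch only illustrates the copy logic described above. The NFS mount point, image names, and function names are hypothetical.

    import subprocess

    # Assumed mount point, shared by the OpenNebula frontend and the workers.
    NFS_IMAGE_DIR = "/srv/cloud/images"

    def clone_to_worker(image_name: str, local_lv: str) -> None:
        """Copy a raw VM image from the NFS mount to a worker-local LV with dd."""
        src = f"{NFS_IMAGE_DIR}/{image_name}"
        subprocess.run(["dd", f"if={src}", f"of={local_lv}", "bs=64M"], check=True)

    def save_to_nfs(local_image: str, image_name: str) -> None:
        """Persist a VM image back to the NFS share with cp, as on an explicit save."""
        subprocess.run(["cp", local_image, f"{NFS_IMAGE_DIR}/{image_name}"], check=True)

Booting from the local LV copy is what avoids concurrent writes: the NFS share is only read during cloning and written during explicit saves.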

Image storage: Images are stored using the “raw” format

OpenNebula scheduler configuration: These values are subject to frequent change; a sketch of the corresponding scheduler invocation follows the list below. Their meaning can be explored at http://opennebula.org/documentation:archives:rel3.0:schg

-t (seconds between two scheduling actions): 10

-m (max number of VMs managed in each scheduling action): 300

-d (max number of VMs dispatched in each scheduling action): 30

-h (max number of VMs dispatched to a given host in each scheduling action): 2
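Assuming these flags are passed straight to the scheduler binary (mm_sched in OpenNebula 3.x), the configuration above corresponds to an invocation like the one built by this illustrative wrapper:

    import subprocess

    # The four parameters listed above, mapped onto the scheduler's command-line flags.
    SCHED_FLAGS = {"-t": "10", "-m": "300", "-d": "30", "-h": "2"}

    def start_scheduler(binary: str = "mm_sched") -> subprocess.Popen:
        """Launch the scheduler with the FR-Inria settings (wrapper is illustrative only)."""
        cmd = [binary] + [tok for pair in SCHED_FLAGS.items() for tok in pair]
        return subprocess.Popen(cmd)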

Permanent resources

Inria provides 4 dedicated worker nodes (Dell PowerEdge C6220 machines) as permanent resources.

These worker nodes have the following characteristics:

CPU: 2 × Intel Xeon E5-2620 @ 2.00 GHz (6 cores each, Hyper-Threading enabled)

Memory: 64 GiB, as 8 × 8 GiB DDR3 1600 MHz memory banks

Local storage: 2 × 300 GB SAS disks

Network: 2 × 1 Gb/s Ethernet links bonded together

One server node with 2+8 disks (RAID 1 for the system, RAID 5 over 8 SAS 10k 600 GB disks), 6 cores, 48 GB of RAM, and 2 network cards with 4 Gigabit Ethernet ports each hosts the different services needed to run the local testbed. Gigabit Ethernet interconnects are available between these nodes, with bonding to increase performance (2 Gb/s on the worker nodes, 4 Gb/s on the server).

This infrastructure is monitored for power consumption using eMAA12 PDUs from Eaton.
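Eaton ePDUs of this class can typically be read out over SNMP. The sketch below, using pysnmp, shows the general idea; the host name, community string, and OID are assumptions, not values from the Inria deployment (which may use a different collection path entirely).

    from pysnmp.hlapi import (
        CommunityData, ContextData, ObjectIdentity, ObjectType,
        SnmpEngine, UdpTransportTarget, getCmd,
    )

    PDU_HOST = "pdu1.example.org"                    # hypothetical PDU address
    POWER_OID = "1.3.6.1.4.1.534.6.6.7.6.5.1.3.0.1"  # assumed outlet-power OID (Eaton enterprise subtree)

    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),      # SNMPv2c; community string assumed
            UdpTransportTarget((PDU_HOST, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(POWER_OID)),
        )
    )
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    for oid, value in var_binds:
        print(oid.prettyPrint(), "=", value.prettyPrint(), "W")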

On-request resources

FR-Inria can expand onto the 160 nodes of Grid‘5000 located in Rennes. When using on-request resources of Grid‘5000, BonFIRE users get a dedicated pool of machines that can be reserved in advance for better control of experiment conditions, while remaining accessible through the standard BonFIRE API. The interface of the reservation system is documented on the dedicated page.
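Once a reservation is active, resources at FR-Inria are created like on any other BonFIRE site. The sketch below shows a compute creation through the broker using Python’s requests library; the broker URL, XML layout, and storage/network identifiers are assumptions to be checked against the BonFIRE API documentation.

    import requests

    BROKER = "https://api.bonfire-project.eu"  # assumed broker URL
    AUTH = ("username", "password")            # BonFIRE portal credentials

    compute_xml = """<compute xmlns="http://api.bonfire-project.eu/doc/schemas/occi">
      <name>eco2clouds-test</name>
      <instance_type>small</instance_type>
      <disk><storage href="/locations/fr-inria/storages/1"/></disk>   <!-- assumed image id -->
      <nic><network href="/locations/fr-inria/networks/0"/></nic>     <!-- assumed network id -->
    </compute>"""

    resp = requests.post(
        f"{BROKER}/locations/fr-inria/computes",
        data=compute_xml,
        headers={"Content-Type": "application/vnd.bonfire+xml"},
        auth=AUTH,
    )
    resp.raise_for_status()
    print("Created:", resp.headers.get("Location"))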

When requesting resources, a description of the available nodes is shown on the web interface so that the user can choose among the 4 available node types. Parapluie nodes are instrumented for power consumption with the same PDUs as the permanent infrastructure.
