The University of Ljubljana in Slovenia hosted the third annual European HPC Summit Week (EHPCSW18) and fifth annual PRACEdays18, which opened with a plenary session on May 29, 2018. The conference was chaired by PRACE Council Vice-Chair (June 2016-June 2017) Sergi Girona (Barcelona Supercomputing Center) and officially opened by PRACE Managing Director and Chair of the EHPCSW & PRACEdays OPC Serge Bogaerts. Official welcomes were given by PRACE Council Chair Anwar Osseyran (Dutch National HPC Center, SURFsara) and by Slovenia's Minister of Education, Science and Sport, Maja Makovec Brenčič.
Monday opened with the EXDCI workshop. Tuesday began with general-session keynotes before splitting into one industrial and five scientific parallel tracks, along with three workshops organized by EHPCSW partner organizations and projects. In the late afternoon, everyone reconvened for general sessions. Wednesday and Thursday followed a similar format with four parallel workshops each day, and Friday was reserved for private project meetings.
Wednesday’s keynote was by Allan Williams, the Associate Director for Services and Technology at the Australian National Computational Infrastructure (NCI). His talk, “Building a world-class national high-performance e-infrastructure for Australian research and innovation,” chronicled the history of Australia’s national cyberinfrastructure (CI). “We’re finally to the point where our program is research-driven and focused on national priorities, but it hasn’t always been that way,” he said.
Australia’s six states and two territories comprise a land mass larger than Europe. Historically, centers sprang up independently at universities in several states, beginning with the first at the Australian National University in Canberra in 1987. The co-investment between the states and the federal government wasn’t always well synchronized. “You may have found compute or storage at one location, but rarely both,” he said. It took years of trial and error before a fully functioning federated program was established.
Two time zones separate the nation’s flagship national high-performance computing (HPC) centers: NCI in Canberra, the nation’s capital in the east, and the Pawsey Supercomputing Centre in Perth on the west coast. The flying distance between them is 3,088 kilometers (1,919 miles); driving at an average speed of 70 mph (113 km/h) would take more than 30 hours. But high-speed networks have closed the geographic gap, enabling high availability of resources and offering the prospect of disaster recovery.
The two flagship centers, which once competed for government funding, are now collaborating on a joint national strategy to benefit Australian researchers. While there were lean years (zero capital funding in 2012, for example), they have managed to progress by innovating and developing integrated cloud, data and HPC services. In 2016, the Australian government asked Australia’s Chief Scientist, Dr. Alan Finkel, to develop a roadmap outlining the infrastructure needs of Australian researchers. After careful consideration, the government recently released its response to the roadmap, committing AU$1.9 billion to research infrastructure investments over the next 12 years. This boost in funding under the National Collaborative Research Infrastructure Strategy (NCRIS) includes an initial investment of AU$140 million over the next five years to renew the nation’s HPC infrastructure across the two centers. The government has also committed to reviewing the roadmap every two years to ensure it continues to align with researcher needs. Recognizing the need for tightly integrated services, it has allocated a further AU$72 million for a shared cloud and storage research platform supporting the national priority research areas.
Pawsey Supercomputing Centre Acting Director Ugo Varetto also attended PRACEdays18 and said, “Australian researchers have access every year to over 250 million CPU hours awarded purely on scientific merit through the National Computational Merit Allocation Scheme (NCMAS).” The scheme gives researchers access to the Pawsey Supercomputing Centre’s flagship system Magnus, NCI’s Raijin, MASSIVE at Monash University in Melbourne and the University of Queensland’s multi-node cluster, FlashLite. All are anchored by major HPC centers and interconnected via the Australian Academic and Research Network (AARNet). “The demand for HPC resources was three times oversubscribed last year, which demonstrated the endless thirst of Australian researchers for high-performance compute resources and the need to grow,” Varetto continued.
Increasingly, researchers require not just HPC services in isolation but integrated workflows of big compute, big data and cloud-based data analysis to deliver scientific outcomes. To get the most out of these resources, however, dedicated expertise within the HPC centers is needed to help researchers develop and optimize software codes for fast I/O and integrated workflows.
Both facilities provide researchers across the country with access to large supercomputing resources. Pawsey currently has 45 staff members who support 1,500 researchers and 170 projects in domains such as radio astronomy, energy, engineering, bioinformatics and health sciences. One of the most notable projects supported by the Australian centers is the Square Kilometre Array (SKA), one of the greatest scientific instruments of our time. With more than 60 personnel who help more than 4,000 researchers, NCI supports the full gamut of research, with a focus on climate, weather, earth system sciences and bioinformatics. Its national research data collections exceed 20 petabytes, and NCI serves as a regional hub for many international data sets, such as Copernicus Sentinel satellite data.
Australia, like other federated CIs, struggles to recruit and retain skilled personnel. “It’s especially challenging in Canberra since it’s one-fourth the size of Perth, but there is high demand for HPC and data analytical skills from the government and industry sectors,” said Williams. “We need to continually ‘grow our own,’ and are keen to benefit from lessons learned by other CIs that have built successful student programs,” he said. “Unfortunately, it has been difficult to organize teams to compete at the International Supercomputing Conference in Germany, or at the annual Supercomputing Conference in the United States because Australian university student exams fall in July and November,” he added.
“Both Australian centers welcome collaborations with industry partners and federated CIs,” said Varetto. Plans to participate in staff exchanges; visiting scholar programs; shared Tier-0 systems with U.S., European Union and Japanese programs; innovative allocation strategies; group procurements; technical exchanges; shared student strategies; online courses; and more are on the horizon. A step toward realizing this was taken recently when Pawsey signed a memorandum of understanding (MOU) with PRACE. “We hope that through close collaboration we can leverage this agreement nationally as we move toward a unified national HPC strategy,” he added.
The distance ‘down under’ doesn’t matter as much as it used to. The Australian Access Federation supports eduroam, which facilitates access among 40 similar federations around the world, including the U.S. and most European countries. “AARNet is working with GÉANT and other providers to alleviate transoceanic network bottlenecks, so we expect international data transfers to improve in all directions, but most recently toward Singapore,” said Williams. Those who wish to learn more about AARNet, and how it peers with other high-speed networks in the region, may wish to attend the Asia Pacific Advanced Network conference (APAN’18) in Auckland, New Zealand, August 5-9, or QUESTnet’18 in Cairns, Australia, September 26-28.
For more information about Australia’s federated CI, visit the NCI and Pawsey websites.
Read additional news about PRACEdays18 in HPCwire. Meanwhile, mark your calendars for #PRACEdays19 and #EHPCSW19 in Poznan, Poland, May 13-17, 2019.
About NCI
NCI, Australia’s national high-end research computing service, is a partnership between the Australian National University, CSIRO, the Australian Bureau of Meteorology, Geoscience Australia, research-intensive universities and consortia supported by the Australian Research Council, and medical research institutes. NCI is supported by the Australian Government through the National Collaborative Research Infrastructure Strategy (NCRIS), with a large fraction of its operations sustained by co-investment from the partner organizations. Through its tightly coupled, high-performance computing and data platforms, overlaid with internationally renowned expertise in computational science, data science and data management, NCI provides essential services that underpin the requirements of research and industry, today and into the future. For more information, please visit www.nci.org.au.
About Pawsey Supercomputing Centre
The Pawsey Supercomputing Centre is a world-class high-performance computing facility representing Australia’s commitment to the solution of Big Science problems. The facility provides researchers across the country access to one of the largest supercomputers in the Southern Hemisphere. Pawsey currently serves over 80 organisations and achieves unprecedented results in domains such as radio astronomy, energy and resources, engineering, bioinformatics and health sciences. The Centre is focused on providing integrated research solutions by giving users simultaneous access to world-class expertise and infrastructure in supercomputing, data, and visualisation services. Pawsey is funded by the Western Australian State Government and the Australian Government National Collaborative Research Infrastructure Strategy (NCRIS). For more information, please visit www.pawsey.org.au.
About PRACE
The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high-performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 5 PRACE members (BSC representing Spain, CINECA representing Italy, ETH Zurich/CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU’s Horizon 2020 Research and Innovation Programme (2014-2020) under grant agreement 730913. For more information, see www.prace-ri.eu.
About the Author
HPCwire Contributing Editor Elizabeth Leake is a consultant, correspondent and advocate who serves the global high performance computing (HPC) and data science industries. In 2012, she founded STEM-Trek, a global, grassroots nonprofit organization that supports workforce development opportunities for science, technology, engineering and mathematics (STEM) scholars from underserved regions and underrepresented groups.
As a program director, Leake has mentored hundreds of early-career professionals who are breaking cultural barriers in an effort to accelerate scientific and engineering discoveries. Her multinational programs have specific themes that resonate with global stakeholders, such as food security data science, blockchain for social good, cybersecurity/risk mitigation, and more. As a conference blogger and communicator, her work drew recognition when STEM-Trek received the 2016 and 2017 HPCwire Editors’ Choice Awards for Workforce Diversity Leadership.