
High performance computing and computational aerodynamics in the UK

Published online by Cambridge University Press:  03 February 2016

D. R. Emerson
Affiliation:
Computational Science and Engineering Department, CCLRC Daresbury Laboratory, Warrington, UK
A. J. Sunderland
Affiliation:
Computational Science and Engineering Department, CCLRC Daresbury Laboratory, Warrington, UK
M. Ashworth
Affiliation:
Computational Science and Engineering Department, CCLRC Daresbury Laboratory, Warrington, UK
K. J. Badcock
Affiliation:
CFD Laboratory, FST Group, Department of Engineering, University of Liverpool, Liverpool, UK

Abstract

The establishment of the UK Applied Aerodynamics Consortium in 2004 brought together many of the UK’s leading research groups to tackle challenging aerodynamic problems on the national computing facility, HPCx. This paper provides a brief history of some early pioneers of numerical simulation and highlights some key contributions to the development of parallel processing that laid the foundations for today’s researchers. The transition from vector to massively parallel processing is discussed from a UK viewpoint, along with technological barriers that could have a significant impact on future systems. Solutions to these barriers are already being sought, and the paper discusses some of the novel technologies that may be deployed in the future. In its short history, the consortium has made substantial progress, and this is briefly discussed with several highlights that illustrate its scientific output. Although a number of challenges are identified, particularly with respect to developing a comprehensive visualisation capability, the consortium is well placed to build upon its initial success.

Type: Research Article

Copyright © Royal Aeronautical Society 2007
