Journal of Internet Computing and Services
    ISSN 2287-1136 (Online) / ISSN 1598-0170 (Print)
    https://jics.or.kr/

Research for Efficient Massive File I/O on Parallel Programs


Gyuhyeon Hwang, Youngtae Kim, Journal of Internet Computing and Services, Vol. 18, No. 2, pp. 53-60, Apr. 2017
DOI: 10.7472/jksii.2017.18.2.53
Keywords: Parallel I/O, Collective I/O, Distributed memory computer, MPI-IO, NFS

Abstract

On distributed memory computers, each processor handles its inputs and outputs independently, so several different file I/O methods are in use. In this paper, we implemented and compared various file I/O methods to evaluate their efficiency on distributed memory parallel computers. The implemented I/O schemes are as follows: (i) parallel I/O using NFS, (ii) sequential I/O on the host processor combined with domain decomposition, and (iii) MPI-IO. For the performance analysis, we used a separate file server and multiple processors on one or two computational servers. The results show that parallel I/O over NFS is the most efficient for input, while sequential output on the host processor with domain decomposition is the most efficient for output. Unexpectedly, MPI-IO shows the lowest performance.
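For reference, the following is a minimal C sketch (not the authors' implementation) of the collective MPI-IO approach listed as scheme (iii): each process writes its block of the decomposed domain into a shared file at a rank-dependent offset using MPI_File_write_at_all. The per-process array size and the file name are illustrative assumptions.

/* Minimal MPI-IO sketch: each rank writes a contiguous block of a
 * shared file at an offset derived from its rank, so the decomposed
 * domain is reassembled in file order. Compile with an MPI compiler
 * wrapper, e.g. mpicc, and run with mpirun. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int N = 1 << 20;                 /* doubles per process (assumed) */
    double *buf = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++)
        buf[i] = (double)rank;             /* dummy local data */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write: all ranks participate, each at its own offset. */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}

In the sequential scheme (ii), by contrast, only the host processor opens the file and the decomposed subdomains are gathered to (or scattered from) it with ordinary MPI communication before the single-process read or write.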




Cite this article
[APA Style]
Hwang, G., & Kim, Y. (2017). Research for Efficient Massive File I/O on Parallel Programs. Journal of Internet Computing and Services, 18(2), 53-60. DOI: 10.7472/jksii.2017.18.2.53.

[IEEE Style]
G. Hwang and Y. Kim, "Research for Efficient Massive File I/O on Parallel Programs," Journal of Internet Computing and Services, vol. 18, no. 2, pp. 53-60, 2017. DOI: 10.7472/jksii.2017.18.2.53.

[ACM Style]
Gyuhyeon Hwang and Youngtae Kim. 2017. Research for Efficient Massive File I/O on Parallel Programs. Journal of Internet Computing and Services, 18, 2 (2017), 53-60. DOI: 10.7472/jksii.2017.18.2.53.