Faster I/O

Default I/O is sometimes too slow for large data. Use buffered reads/writes or fast parsers to avoid timeouts.

You may encounter programming problems that instruct you to use “faster I/O.” This usually means the test data is large enough that default input/output routines (which have per-call overhead) may cause time limits to be exceeded. Use buffered reads/writes or language-specific fast parsers so your algorithm, not I/O, determines runtime.

How Do You Use Faster I/O?

The approach depends on the language; below are concise, practical patterns for C++, Python, and Java.

C++

Add these lines at the start of main():

ios_base::sync_with_stdio(false);
cin.tie(NULL);

Call ios_base::sync_with_stdio(false) before any I/O. This disables synchronization between C++ iostreams and C stdio, removing extra overhead. Also prefer '\n' over endl to avoid flushing:

cout << x << '\n';

When appropriate, use C stdio (printf/scanf) for speed, but avoid mixing it with cin/cout once synchronization has been disabled:

printf("%d\n", x);
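
Putting these together, here is a minimal sketch, assuming a hypothetical task of reading n integers and printing their sum:

#include <iostream>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);

    int n;
    cin >> n;                          // hypothetical input: n, then n integers
    long long sum = 0;
    for (int i = 0; i < n; i++) {
        int x;
        cin >> x;
        sum += x;
    }
    cout << sum << '\n';               // '\n' instead of endl: no per-line flush
    return 0;
}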

Python

Use buffered reading and writing instead of input() and print() for large I/O.

Fast input (read entire stdin once and parse):

import sys

data = sys.stdin.buffer.read().split()
# example: iterate tokens as ints
it = iter(data)
n = int(next(it))
arr = [int(next(it)) for _ in range(n)]

Fast output (accumulate and write once):

out_lines = []
out_lines.append(str(answer))
# add more lines...
sys.stdout.write("\n".join(out_lines) + "\n")

If you need streaming output that is still faster than print():

sys.stdout.write(f"{value}\n")

For line-by-line reading (faster than input()):

for line in sys.stdin.buffer:
    line = line.rstrip()  # line is bytes; decode if needed: line.decode()
    # process line...

Notes:

  • Use .buffer to work with bytes (faster). Convert tokens to int when needed.
  • For many small outputs, accumulate in a list and join to avoid many sys.stdout.write calls.
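
Combining the patterns above, a minimal sketch, again assuming a hypothetical task of reading n integers and printing their sum:

import sys

def main():
    data = sys.stdin.buffer.read().split()   # one read of all of stdin, split into byte tokens
    it = iter(data)
    n = int(next(it))                        # hypothetical input: n, then n integers
    total = sum(int(next(it)) for _ in range(n))
    sys.stdout.write(str(total) + "\n")      # one write instead of print()

main()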

Java

Use a BufferedInputStream or a custom fast scanner for input, and a BufferedWriter or StringBuilder for output.

Fast input (custom fast reader):

import java.io.IOException;
import java.io.InputStream;

class FastScanner {
    private final InputStream in = System.in;
    private final byte[] buffer = new byte[1 << 16];
    private int ptr = 0, len = 0;

    private int read() throws IOException {
        if (ptr >= len) {
            len = in.read(buffer);
            ptr = 0;
            if (len <= 0) return -1;
        }
        return buffer[ptr++];
    }

    int nextInt() throws IOException {
        int c;
        while ((c = read()) <= ' ') if (c == -1) return Integer.MIN_VALUE;
        int sign = 1;
        if (c == '-') { sign = -1; c = read(); }
        int val = 0;
        while (c > ' ') {
            val = val * 10 + (c - '0');
            c = read();
        }
        return val * sign;
    }

    String next() throws IOException {
        int c;
        while ((c = read()) <= ' ') if (c == -1) return null;
        StringBuilder sb = new StringBuilder();
        while (c > ' ') {
            sb.append((char)c);
            c = read();
        }
        return sb.toString();
    }
}

Fast output (BufferedWriter or StringBuilder):

import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.io.IOException;

BufferedWriter out = new BufferedWriter(new OutputStreamWriter(System.out));
// or accumulate:
StringBuilder sb = new StringBuilder();
sb.append(answer).append('\n');
// ...
out.write(sb.toString());
out.flush();
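
Putting the pieces together, a minimal sketch for the same kind of hypothetical task (read n integers, print their sum), combining the FastScanner above with a single buffered write:

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;

public class Main {
    public static void main(String[] args) throws IOException {
        FastScanner fs = new FastScanner();   // the FastScanner class shown above
        BufferedWriter out = new BufferedWriter(new OutputStreamWriter(System.out));

        int n = fs.nextInt();                 // hypothetical input: n, then n integers
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += fs.nextInt();
        }

        out.write(sum + "\n");
        out.flush();                          // flush once at the end
    }
}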

A simpler alternative for input: java.util.Scanner is fine for small input, but for large input prefer the custom FastScanner above or java.io.BufferedReader with StringTokenizer:

BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
StringTokenizer st = new StringTokenizer(br.readLine());
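
Continuing that snippet, each whitespace-separated token on the line is consumed with nextToken(), e.g. to sum a hypothetical line of integers:

long sum = 0;
while (st.hasMoreTokens()) {
    sum += Long.parseLong(st.nextToken());   // parse each token as a number
}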

Notes:

  • Always flush/close BufferedWriter at the end.
  • Avoid System.out.println in tight loops; use BufferedWriter or build a single large string.

August 3, 2019
