In the context of the Hadoop MapReduce framework, I am convinced that it is worth catching OutOfMemoryError and recovering from it. Let me illustrate:
Here we have a typical dummy mapper. For illustrative purposes, the variable bigArray is introduced as the culprit behind the OutOfMemoryError.
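A minimal sketch of such a mapper (the class name and the way bigArray is sized are illustrative assumptions, not the exact code from this example):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DummyMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // bigArray stands in for whatever per-record structure can blow up
        // on rare, oversized records; the sizing is deliberately naive.
        int[] bigArray = new int[value.getLength() * 1_000_000];

        // ... normal per-record processing would go here ...
        context.write(new Text(key.toString()), value);
    }
}
```

For the vast majority of records this works fine; the occasional oversized record makes the allocation fail and the whole task attempt dies.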
Now, let's answer a few questions:
- Is it abnormal when the system throws an OOME?
Yes. However, when we are dealing with user-generated content, we cannot always protect ourselves from it.
For example: in a social network, 99.99% of accounts carry 10-20 MB of statistics data each, while the remaining 0.01% of fake/misused/flash-mobbed accounts barely fit into 200-300 MB. Handling those is a challenge.
- How do we prevent the system from melting down?
First of all, keep all declarations and all references inside the try block, as in the sketch below. When the OOME is thrown, the objects referenced only within the try block become unreachable, the JVM's GC can reclaim them, and the task can proceed.
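A hedged sketch of that pattern, reworking the dummy mapper above (class and counter names are assumptions): every heavy allocation lives inside the try block, and the catch merely counts and skips the record.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class RecoveringMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        try {
            // Everything heavy is declared and referenced only inside the
            // try block: once we land in the catch, these objects are
            // unreachable and the GC can reclaim their memory.
            int[] bigArray = new int[value.getLength() * 1_000_000];

            // ... normal per-record processing and output ...
            context.write(new Text(key.toString()), value);
        } catch (OutOfMemoryError e) {
            // Skip the offending record and keep the task alive.
            // The counter group/name are illustrative.
            context.getCounter("MapErrors", "OutOfMemorySkipped").increment(1);
        }
    }
}
```

The bet here is that the single oversized allocation is what failed, so once it becomes unreachable the GC frees enough headroom for the task to continue with the next record.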
- Is it OK to skip 0.01% of problematic data?
This is debatable; however, I am convinced that it is. As Tom White put it: "In an ideal world, your code would cope gracefully with all of these conditions. In practice, it is often expedient to ignore the offending records."
To summarize the above, let me also present a screenshot from a real-world mapper. Out of 274,022 records, 9 caused OutOfMemoryErrors, i.e. roughly 0.003%.
|Figure 1: real-world mapper output|
Tom White, Hadoop: The Definitive Guide, 2nd edition, Chapter 6, "Skipping Bad Records", p. 185.