Knowledge of Hadoop/Big Data can be considered a good-to-have skill, but I'm not sure it's absolutely necessary for using deep neural network models. And I doubt there is a single answer either.
Whether neural networks strictly require very large datasets is itself debatable. The blog post below gives some intuition into how different deep learning practitioners have different dataset-size requirements.
And the snippet below, from a very recent post by Andrew Ng (of Stanford/Coursera ML fame), might interest you:
"Lately I’ve been thinking about how to train neural networks on small amounts of data. I try to find quiet time to brainstorm, and sometimes I end up with many pages of handwritten notes. After I’ve obsessed over a problem during the day, before I fall asleep I remind my brain that I want to make progress on it. Then, if I’m lucky, I awaken in the morning with new ideas. "
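To make the "you don't always need big data" point concrete, here is a minimal sketch in plain NumPy: a tiny one-hidden-layer network trained on just 40 synthetic samples. The data, architecture, and hyperparameters are all my own illustrative assumptions, not anything from the quoted post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic dataset: 40 points, 2 features, roughly linearly separable.
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer (tanh) and a sigmoid output, trained by batch gradient descent.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)        # forward pass, hidden activations
    p = sigmoid(h @ W2 + b2)        # predicted probabilities
    grad_out = (p - y) / len(X)     # cross-entropy gradient w.r.t. output logits
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(0)
    W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(0)

acc = ((p > 0.5) == y).mean()
print(f"training accuracy on 40 samples: {acc:.2f}")
```

This is obviously a toy problem, but it illustrates the quoted sentiment: the interesting question is often how to train well on small data (regularization, transfer learning, augmentation), not how to store petabytes of it.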