Hadoop Installation Series: How to Start Hadoop and Its Components 🚀

Learn step-by-step how to start Hadoop and its essential components using start-dfs.sh, including NameNode and DataNode setup for a smooth Hadoop environment.

Saqib24x7 · 44 views · 1:19

About this video

The start-dfs.sh command, as the name suggests, starts the components necessary for
HDFS: the NameNode, which manages the filesystem metadata, and a single DataNode, which holds the actual data blocks.
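On a Hadoop 1.x pseudo-distributed install with the Hadoop bin directory on the PATH, the startup session looks roughly like the sketch below; the log paths and hostname are illustrative and will differ on your machine:

    # Start the HDFS daemons (NameNode, DataNode, SecondaryNameNode)
    $ start-dfs.sh
    starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-localhost.out
    localhost: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-localhost.out
    localhost: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-hadoop-secondarynamenode-localhost.out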

The SecondaryNameNode, despite its name, is not a standby NameNode; it periodically checkpoints the NameNode's metadata. We'll discuss it in a later chapter.

After starting these components, we use the JDK's jps utility to see which Java processes are
running and, once the output looks good, we use Hadoop's dfs utility to list the root of
the HDFS filesystem.
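A quick verification might look like the following; jps prints one "PID ClassName" line per running JVM, and the PIDs shown here are illustrative:

    # List the Java processes owned by the current user; PIDs will differ
    $ jps
    2112 NameNode
    2234 DataNode
    2356 SecondaryNameNode
    2478 Jps

    # List the root of HDFS (hadoop dfs is the filesystem shell in Hadoop 1.x)
    # A fresh filesystem may show only a /tmp directory, or nothing at all
    $ hadoop dfs -ls /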

After this, we use start-mapred.sh to start the MapReduce components—this time the
JobTracker and a single TaskTracker—and then use jps again to verify the result.
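Again assuming a Hadoop 1.x setup, the second stage looks roughly like this (log paths and PIDs are illustrative):

    # Start the MapReduce daemons (JobTracker, TaskTracker)
    $ start-mapred.sh
    starting jobtracker, logging to /opt/hadoop/logs/hadoop-hadoop-jobtracker-localhost.out
    localhost: starting tasktracker, logging to /opt/hadoop/logs/hadoop-hadoop-tasktracker-localhost.out

    # All five daemons should now be running
    $ jps
    2112 NameNode
    2234 DataNode
    2356 SecondaryNameNode
    2590 JobTracker
    2712 TaskTracker
    2834 Jps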

There is also a combined start-all.sh script that we'll use at a later stage, but early
on it's useful to do a two-stage startup to more easily verify the cluster configuration.
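For reference, in Hadoop 1.x start-all.sh simply runs the two scripts above in order, and stop-all.sh shuts everything down again:

    # One-step equivalent of start-dfs.sh followed by start-mapred.sh
    $ start-all.sh

    # Stop the MapReduce daemons, then the HDFS daemons
    $ stop-all.sh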

Video Information

Views: 44 (total views since publication)
Duration: 1:19 (video length)
Published: Jun 18, 2014 (release date)
