Running a Jupyter Notebook is as simple as executing jupyter notebook, assuming you have already installed it with pip install jupyter.
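As a minimal sketch, on a local machine (or inside a virtualenv, so no admin rights are needed) that boils down to two commands:

```bash
# Install Jupyter into the current Python environment
pip install jupyter

# Start the notebook server; by default it serves on http://localhost:8888
jupyter notebook
```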
Doing this in a different environment (a Hadoop cluster) is basically the same. It requires a little more sysadmin experience (not much if you use the right tools), but the problems really start when you don't have admin access to the cluster nodes.
The "problem" with the parcel is that it only includes the libraries it doesn't manage services (start, stop, restart). So the parcel is great if you already have a notebook server (maybe a multi-user Jupyter Hub server) that has access to the cluster and all you are missing is libraries for your Spark job.