Best practice for using %%sql magics on python Spark notebooks

Is there a community best practice for running SparkSQL commands through the Python interpreter? I'm using the docker all-spark-notebook and have been looking at adding integration via sparkmagic.

Have others been down this path already? I'm a little surprised this isn't already part of the all-spark-notebook image, and wanted to check whether there's another path Spark users are leaning toward.
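For what it's worth, if you don't need a remote Livy endpoint, you can get a `%%sql`-style magic with a few lines of IPython glue around `SparkSession.sql()`. This is only a minimal sketch: `FakeSpark` below is a hypothetical stand-in so the snippet runs without a cluster; in a real notebook you'd use your actual `SparkSession`, and the registration step (shown in the docstring) only works inside a live IPython kernel.

```python
# Sketch: a %%sql-style cell magic that hands the cell body to
# spark.sql(). FakeSpark is a stand-in so this runs without a
# cluster; swap in a real pyspark.sql.SparkSession in a notebook.

class FakeSpark:
    def sql(self, query):
        # A real SparkSession would return a DataFrame here.
        return f"executed: {query.strip()}"

spark = FakeSpark()

def sql(line, cell):
    """Cell-magic body: pass the whole cell text to spark.sql().

    In a notebook, register it with:
        from IPython.core.magic import register_cell_magic
        register_cell_magic(sql)
    and then use:
        %%sql
        SELECT * FROM my_table
    """
    return spark.sql(cell)

print(sql("", "SELECT 1"))
```

Not as featureful as sparkmagic (no session management, no autovisualization), but it keeps everything inside the local kernel.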

I just started down this road myself. I'm trying to get a JupyterHub cluster on GKE to send jobs to a Dataproc cluster running Livy.
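In case it helps anyone comparing notes: sparkmagic ultimately just talks to Livy's REST API, so you can sanity-check the Dataproc/Livy side independently of the notebook. A rough sketch of what the calls look like is below; `LIVY_URL` is a placeholder, and the actual HTTP calls (commented out) assume a reachable Livy server and the `requests` package. The payload shape follows Livy's documented `POST /sessions/{id}/statements` interface.

```python
# Sketch of driving Livy's REST API directly (what sparkmagic does
# under the hood). LIVY_URL is a placeholder endpoint.
import json

LIVY_URL = "http://livy.example.com:8998"  # placeholder, not a real host

def statement_payload(code):
    """Build the JSON body Livy expects at POST /sessions/{id}/statements."""
    return json.dumps({"code": code})

# Against a live server, with `requests` installed, the flow is roughly:
#   import requests
#   sess = requests.post(f"{LIVY_URL}/sessions", json={"kind": "sql"}).json()
#   requests.post(f"{LIVY_URL}/sessions/{sess['id']}/statements",
#                 data=statement_payload("SELECT 1"),
#                 headers={"Content-Type": "application/json"})

print(statement_payload("SELECT 1"))
```

If those calls work from a pod in the GKE cluster, the remaining work is mostly sparkmagic configuration (pointing its `config.json` at the Livy URL).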

Do you have any updates?