PyQtGraph Maintainers Seeking Input Regarding User Survey

Hello Jupyter discourse community,

My name is Ogi Moore, I am one of the maintainers of pyqtgraph. This topic is a continuation of an email chain that @carreau suggested we migrate over here.

For the sake of context, PyQtGraph is a plotting library that aims to bridge the gap between numpy and the Qt framework. Where the library really excels is interactivity and performance; while it can be used to generate publication-quality plots, there are other options in the PyData ecosystem that fill that need far better.

Over the past few years, we have been clearing out our PR queue, and now it is manageable enough that we would like input from our user-base with regards to what functionality should be added to the library, and what barriers exist making the library difficult to adopt in the first place.

While talking about this subject with tacaswell (not tagging him, as new Discourse users can only tag 2 people in a post), he suggested I reach out to @carreau, who in turn suggested I get in touch with @isabela-pf, as she was involved with creating the previous Jupyter survey.

@carreau mentioned that there will be a NumFOCUS survey coming out in 2022, but I think we would like to create and publish our survey sometime in the next month or two. In the future, if we can bundle our survey with the Scientific Python survey, we would like to take advantage of that, given that there is likely significant overlap between our user-base and the recipients of that survey.

With that, I welcome any input or lessons learned regarding the creation of a user survey.

Thanks for your time!
Ogi


mentioned that there will be a NumFOCUS survey coming out in 2022

I meant it would be great to have, I’m not sure there will be. And it could be nice to go through NumFOCUS, as I believe they have a Typeform subscription.

If I understand correctly, one of the questions you raised was also how to reach as many people as possible.

I think there is no issue with promoting such a survey from the Jupyter Twitter account, and we can ask NumFOCUS to link to it in one of their newsletters.

I believe that a common survey would also reach more people.

As I outlined in the private exchange, it is also good to do regular surveys to get a baseline and see how things are evolving.

cc @tacaswell

@carreau Reach of the survey became a concern once I started requesting input/advice, and a common theme in the feedback I got was not so much the content of the survey but its reach. Given how little I know about our user-base, this is a very valid concern that I didn’t give much thought to initially. We would be grateful for any help in this regard from NumFOCUS or Jupyter members!

I meant it would be great to have, I’m not sure there will be.

My mistake! I agree with the sentiment, that would be great to have. If there is something I can do to assist in this effort, I would be happy to do so.

it is also good to do regular surveys to get a baseline and see how things are evolving.

While we didn’t consider this initially, I’m starting to think that might be the direction we go. We have a number of active maintainers and contributors, but we’re not moving very fast; there is a lot of work to do, and giving our user-base the opportunity to periodically guide us is a good thing.

Hi there! I’m still looking for any recorded info on how decisions were made for the Jupyter survey at the end of last year/start of this year. I think a lot of those decisions were made in meetings without documentation specific enough to be useful for you (though you can find some in this issue or in minutes across issues) and/or have been overwritten, unfortunately, so I’ll do my best to give some tips for now since you have a shorter timeline.

Personally, I think the most important thing to start with is knowing exactly what you want to investigate in this survey. We went through a few goals in the Jupyter one, and it produced some extremely different draft questions and surveys. If you start with a clear scope, I think it becomes easier to

  • Write stronger and more specific questions

  • Review what questions fit the survey best

  • Produce a shorter survey (which helps increase the number of participants completing it)

  • Produce a survey with a coherent theme and results

It sounds like you have a clear goal with

we would like input from our user-base with regards to what functionality should be added to the library, and what barriers exist making the library difficult to adopt in the first place

which is fantastic! I think it would benefit you to stick with this and use it when you and/or others review drafts. Asking yourselves things like “does question x either provide input about functionality, provide input about adoption barriers, or provide context for other on-topic questions?” can really help you keep the survey short and focused on the input you want, rather than collecting a bunch of data you don’t need.

Some miscellaneous pointers (found while reviewing my notebooks and different community surveys I’ve encountered):

  • Say thank you! It’s easy to forget when you are working hard on writing the rest of the survey.

  • Participants like having an estimated time for completion before they start a survey.

  • It’s ethical to say what the data will be used for and/or who it will be shared with (e.g., is it for the survey author team only, some other core team, or the public?).

  • If you want to connect people with your community, you can also end the survey with a place to follow up on it (if results are to be public) or on PyQtGraph as a whole.

  • Ask yourself what info each question will collect and pause to think about whether you really need it. A common case of this not being done is when surveys ask a ton of demographic questions just because they’ve seen other surveys do it, but then don’t actually need or use that information in their results. Collecting only what you need is respectful to your participants and doesn’t create unneeded work for whoever is processing the results.

  • Be clear which questions are required and which are optional.

  • If you want to follow up with people, make sure that is an optional “can we contact you for more” type question. And don’t forget to remove contact info from public results.

I’d be happy to give a look at any draft survey when you get there and if you want the feedback. And, of course, I’ll do my best to answer any other questions you have.


Hi @isabela-pf

This is fantastic input, thank you! We’ve started working on drafts of the survey, and we still have a significant amount of your input to incorporate. When we are ready for more feedback, would you be willing to take a look at what we’ve created?

Thank you again!
Ogi


Of course, @j9ac9k! I’d be happy to take a look at a draft when you are ready. Whenever that is, you can also let me know if you have any specific questions you’d like me to review for and I’ll do my best.
