Seeking Advice on User Panels & Feedback Groups

Hi everyone,

Our product team regularly makes decisions about Brightspace configuration, features, and rollouts—but we've realized we're often doing this without systematic input from the people who use the platform every day: our faculty and students.

We're planning to establish user panels or feedback groups to get ongoing, structured insight into how Brightspace is actually being used. We're in the early stages and would love to learn from others who have implemented something similar at their institutions. We're curious about:

  • How to structure panels (frequency, size, participant selection)
  • Which methods work best for gathering meaningful feedback
  • Any challenges you encountered and how you addressed them
  • Whether this actually influenced your decision-making (and how)

If your institution has established user panels, advisory groups, or any structured feedback mechanism, I'd really appreciate hearing about your experience—what worked, what didn't, and any advice you'd offer.

Thanks in advance for any insights you can share!

Mads Holst
Product Manager, AU, Denmark

Answers

  • Sangeetha.T.629
    Posts: 185 🤝🏼

    Hi @Mads.H.1,

    Thank you for reaching out to us through the Community!

    I’ve reviewed your discussion questions, and while each institution approaches this differently, I can share a few suggestions from the Brightspace perspective that may help as you explore structured feedback models.

    One option many clients find effective is using existing Brightspace tools, such as Surveys and Discussions, to gather feedback from a random or representative group of users (faculty or students). These tools offer flexibility and can help you capture ongoing insights without having to build a full advisory structure immediately.

    • Surveys can be set to Anonymous, which encourages more honest and detailed feedback from users who may hesitate to share concerns openly.
    • Discussions can be configured for open feedback threads or targeted conversations with selected users about specific workflows, new tools, or configuration decisions.

    This approach can help you surface pain points early, identify trends, and better understand how users are interacting with Brightspace in their day-to-day teaching and learning.

    If you decide to build more formal panels later, these tools are also a great first step for identifying potential participants and gathering baseline data.
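
    Outside of Brightspace, if it would help with participant selection, a small script can draw a stratified random sample from an exported instructor roster so that each faculty is represented evenly. This is only a rough sketch under the assumption that you can export a roster as a CSV with name, email, and faculty columns; the file name and column names below are placeholders, not something Brightspace produces in this exact form.

        # Rough sketch: stratified random draw of panel candidates per faculty.
        # Assumes an exported roster "instructor_roster.csv" with columns
        # "name", "email", and "faculty" -- adjust to your actual export.
        import csv
        import random
        from collections import defaultdict

        PER_FACULTY = 6  # e.g. aim for 5-8 candidates per faculty

        by_faculty = defaultdict(list)
        with open("instructor_roster.csv", newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                by_faculty[row["faculty"]].append(row)

        for faculty, instructors in sorted(by_faculty.items()):
            sample = random.sample(instructors, min(PER_FACULTY, len(instructors)))
            print(f"\n{faculty}:")
            for person in sample:
                print(f"  {person['name']} <{person['email']}>")

    The sampled instructors could then be invited to an Anonymous Survey or a targeted Discussion thread as a lightweight first panel, before you commit to a formal structure.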

    Please let me know if you have any concerns.

    Thanks,

    Sangeetha

  • Mads.H.1
    Posts: 3 🌱

    Thanks, Sangeetha — good suggestions, and I agree that Surveys and Discussions can be a useful starting point.

    What I'm after is something a bit more structured and ongoing, though. Let me share where our thinking is right now — but please keep in mind that this is very much a sketch… reality will no doubt have opinions of its own.

    We're looking at setting up faculty-based user panels — one per faculty (five in our case), each with 5–8 instructors. They'd meet with our product team twice(?) a year in a workshop format where we walk through planned changes or new features and get structured feedback and ideas before we commit to decisions.

    The way I see it, a few things matter:

    • First and foremost, panels give each faculty a direct channel to influence how Brightspace develops at our institution — so it doesn't quietly drift in a direction that overlooks their specific needs and workflows.
    • It also has to be a genuine dialogue. Panel members bring insight from their colleagues and their daily teaching practice, we bring the roadmap and our rationale. Better decisions come from that exchange.
    • And to be clear — panel members wouldn't be stakeholders or decision-makers in any formal sense. They're experienced users whose perspective helps us make better-informed choices.

    What we're really trying to do is move from reactive support requests to proactive, structured conversation — and ground our decisions in how Brightspace is actually used in teaching… something that is ever changing.

    I imagine many other institutions have run into similar questions during their own Brightspace implementation or transition — whether or not it ended up as a formal panel. I'd love to hear what you've tried.

    How did you find and select participants? What made sessions productive (or not)? And did the input actually change decisions — and if so, how did you communicate that back?

    Any experience — formal or informal — would be really helpful at this stage.
    - Mads