The Role of Community Moderation in Live Streaming Services

Community moderation has become increasingly important on today's digital platforms, as the influx of user-generated content pushes platforms toward a more distributed approach to content management. From social media, online forums, and gaming to live streaming, content-sharing platforms are now shaped by a user-centric design in which ease of use and accessibility enable broad participation.

Research by SEIGMA highlights that 50% of Australians are now online gamers, thanks to the easy access to gaming in the home. Predominantly young adults aged 18 to 24, these players signal a new era in how we entertain ourselves and show where growing consumer demand is heading by way of participatory culture.

User-generated live streaming epitomizes this participatory culture, extending the notion of live performance from an unbroadcast session between a few friends right through to a worldwide audience. Live streaming centered on esports has grown significantly in recent years and is expected to keep growing. The unpredictable outcome of a match, together with dedicated player fan bases, makes it an attractive form of entertainment for both gamers and viewers, one that closely mirrors traditional sport.

Live streaming as a communication tool creates technology-mediated communities in which users share a common interest or goal and genuine human interaction is a priority. However beneficial it is for communities to engage and interact in a virtual context, the size and structure of these communities are often beyond the control of the platform providers, and this is where the need for moderation is greatest.

Benefits of Community Moderation

Before we embark on the search for the perfect solution, we need a clear understanding of what community moderation entails. The term 'community' can be daunting, with varying perceptions of what it constitutes. Some may picture hundreds or even thousands of online users lurking in the depths of a chat room or forum. On the other hand, a community could simply be a small group of friends or acquaintances coming together to play multiplayer games or organize a gaming event. In simple terms, a community is the coming together of more than one individual, forming some sort of social structure among the users, whether a loose-knit group or a more formal gathering.

Moderation is the intervention, or supervision, of a third party to regulate behavior or maintain a certain level of control, with the intention of preventing conflict or disaster. Moderating a single individual is difficult enough to grasp, let alone a group of unruly gamers holding an Xfire public chat or a very amusing conversation over Ventrilo. This is why there is a growing desire for moderation tools designed to control what happens in various games, chatrooms, and programs.

There are various methods of community moderation, and the concept of a moderation tool is something I will address later in this guide. As this writing is focused on gaming and gamer culture, the primary form of moderation considered here is influencing and controlling behavior between players, rather than, say, an administrator controlling the forum posts of its users. Evidently, a group of players behaving inappropriately will affect everyone's gaming experience, and if everyone's experience turns bad, the game is probably no longer worth playing. Ideally a developer would have some form of in-game moderation tool available to 'reset things', but for now we simply wish to prevent these negative situations from happening.

Maintaining a Safe Environment

The goal of maintaining a safe environment during a live stream is to create a space where users can interact freely without fear of being negatively affected. Unsafe environments dissuade user interaction and encourage viewers to leave the stream shortly after visiting it. If the stream contains inappropriate content, it may damage the reputation of the streamer, their brand, and the community the streamer has built. Sponsors may pull their support from a streamer and community with a damaged reputation, potentially ending a streaming career. Viewers are more likely to stay and return to a stream if they feel comfortable and welcomed, and communities with tight-knit relationships between viewers, where the streamer can actively engage with them, are more inclined to be successful. By moderating a live stream to provide a clean and safe environment, moderators uphold a higher standard so users can enjoy their time and continue to visit and support the streamer.

Promoting Positive User Interactions

Community moderation also provides a framework for appropriate interaction and social norms within the community. This is achieved through the creation and enforcement of rules and guidelines for how users interact. With a clear set of rules to follow, members are less likely to exhibit anti-social behavior and more likely to develop relationships with other members, improving overall interaction within the community. The rules and guidelines can also act as a first response to undesirable behavior, with warnings that advise the user on which aspects of their conduct need to change. This is preferable to immediate disciplinary action and is an effective way to change a user's behavior without isolating them from the community.
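As a concrete illustration of this warn-before-punishing approach, the sketch below shows a hypothetical escalation ladder for a chat community. The offence thresholds, durations, and action names are assumptions for illustration, not a prescribed policy.

```python
# Hypothetical escalation ladder: warn first, then time out, then ban.
# The offence counts and durations below are illustrative assumptions.
from collections import defaultdict

ESCALATION = [
    ("warn", None),            # first offence: advise the user what to change
    ("timeout", 10 * 60),      # second offence: 10-minute timeout (seconds)
    ("timeout", 24 * 3600),    # third offence: 24-hour timeout
    ("ban", None),             # further offences: removal from the community
]

offences = defaultdict(int)    # username -> number of recorded offences

def next_action(username: str) -> tuple[str, int | None]:
    """Record an offence and return the (action, duration) to apply."""
    step = min(offences[username], len(ESCALATION) - 1)
    offences[username] += 1
    return ESCALATION[step]

if __name__ == "__main__":
    for _ in range(5):
        print(next_action("example_viewer"))
    # ('warn', None), ('timeout', 600), ('timeout', 86400), ('ban', None), ('ban', None)
```

The point of the ladder is exactly the one made above: the user is told what to change before any action that isolates them from the community.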

Giving the community its own moderation abilities can also help it maintain a positive environment and develop appropriate relationships among users. Essentially, this helps create the comfortable atmosphere necessary for positive interaction. Unwanted disruptive behavior and hostility are deterred by visible consequences: a new member is less likely to be disruptive or hostile if doing so results in isolation from the rest of the community. This in turn allows more experienced members to feel at ease and in control, giving them the opportunity to assert themselves and take on a leadership role within the community.

Preventing Inappropriate Content

Preventing inappropriate content is a key component of community moderation and is considered a critical feature of any online community. Implementing systems in which users can tag inappropriate content, publishing a clear set of guidelines dictating what is and is not appropriate, and defining roles responsible for removing inappropriate content are all ways communities attempt to reduce its presence. The task of preventing it, however, is a continuous challenge.

In a live-streamed environment where viewers can instantly submit content that is broadcast to a wider audience, preventing inappropriate content is particularly difficult. The live nature of the content, a potentially unlimited number of viewers, and the difficulty of attaching metadata to media in order to filter it all make a live streaming environment especially susceptible, and there have been many instances where unsuitable material has been broadcast to a wide audience.

An automated system that prevents inappropriate content by using image processing to filter unsuitable media would be ideal. However, such technology is currently not foolproof and may result in the accidental removal of acceptable media. An alternative is to give viewers the ability to instantly flag unsuitable media, an action which could, in turn, temporarily block the content provider's access to the streaming service. This is an example of a social moderation system and could serve as a form of instant justice for those who attempt to broadcast unsuitable material. Such a method would, however, need careful testing and possibly simulation to ensure it does not give users an opportunity to abuse the system in an attempt to silence legitimate broadcasters.
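The sketch below shows one way such a social moderation system might work, assuming a simple report threshold and a temporary suspension. The threshold, window, and suspension length are invented for illustration and would need the tuning and abuse testing noted above.

```python
# Hypothetical viewer-report system: if enough distinct viewers flag a
# broadcaster within a short window, the broadcast is suspended temporarily.
# Thresholds and durations are illustrative assumptions only.
import time

REPORT_THRESHOLD = 20          # distinct reporters required to trigger action
REPORT_WINDOW = 5 * 60         # reports must fall within this many seconds
SUSPENSION_LENGTH = 15 * 60    # temporary suspension, pending human review

reports: dict[str, dict[str, float]] = {}     # broadcaster -> {reporter: timestamp}
suspended_until: dict[str, float] = {}        # broadcaster -> suspension expiry

def report(broadcaster: str, reporter: str) -> bool:
    """Record a report; return True if the broadcaster gets suspended."""
    now = time.time()
    entries = reports.setdefault(broadcaster, {})
    entries[reporter] = now                   # one vote per distinct reporter
    # Keep only reports inside the rolling window.
    recent = {r: t for r, t in entries.items() if now - t <= REPORT_WINDOW}
    reports[broadcaster] = recent
    if len(recent) >= REPORT_THRESHOLD:
        suspended_until[broadcaster] = now + SUSPENSION_LENGTH
        return True
    return False

def is_suspended(broadcaster: str) -> bool:
    return time.time() < suspended_until.get(broadcaster, 0.0)
```

Counting only distinct reporters and routing anything longer than a brief suspension through human review are two ways to blunt the coordinated-abuse problem mentioned above.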

Challenges of Community Moderation

Research into community moderation highlights the challenges facing moderators, such as balancing free expression with the control and censorship of indecent or harmful material. The effectiveness of a community depends on its ability to promote open discussion, and when moderation policies are seen as too strict, users may seek out other sources of information. At the same time, communities moderate offensive or off-topic posts to maintain civility. Striking a balance between open discussion and civility is difficult, leading communities to swing between heavy-handed moderation and permissiveness.

Another issue is managing member behavior, particularly dealing with disruptive users known as 'trolls' and preventing flame wars. Trolling is a common problem in communities with open discussion. Ignoring trolls is generally advised, as confronting them directly often inflames the situation. Preventative measures such as word filters and forced moderation for new members can reduce trolling, but no community is immune. Clear policies on acceptable behavior and disciplinary action can contain the impact; however, banning disruptive members can itself spawn off-topic discussions about them, attempting to provoke a reaction and the reinstatement of their posting privileges.
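As a rough illustration of the "forced moderation for new members" idea just mentioned, the sketch below holds messages from accounts younger than a cutoff for manual review. The cutoff value and data shapes are assumptions for illustration only.

```python
# Hypothetical new-member gate: messages from accounts younger than a cutoff
# are held for moderator review instead of being posted immediately.
# The cutoff value and the Message/queue shapes are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

NEW_MEMBER_CUTOFF = timedelta(days=7)   # accounts younger than this are gated

@dataclass
class Message:
    author: str
    created_account_at: datetime
    text: str

review_queue: list[Message] = []

def submit(message: Message, now: datetime | None = None) -> str:
    """Post the message immediately, or queue it for moderator review."""
    now = now or datetime.utcnow()
    if now - message.created_account_at < NEW_MEMBER_CUTOFF:
        review_queue.append(message)
        return "held for review"
    return "posted"
```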

Balancing Freedom of Expression and Censorship

The ability of individuals to express themselves freely is an important value in any modern democratic society. This is especially true in the United States, where free speech is an integral part of the cultural identity, and the value carries over to the internet, where anyone can potentially have their thoughts heard by a wide audience.

In the case of live streaming services, there is often very little filtering on what can be broadcast, leaving a very wide range of content. The live and unscripted nature of streams means that community moderation often takes a back seat to the stream itself, leaving moderation duties mostly in the hands of the broadcaster.

Streams with a large number of viewers, or those highlighted on the front page, often attract viewers looking to have a bit of fun, frequently at the expense of others. This can range from purely disruptive trolling to the deliberate targeting of individuals with malicious intent, such as defamation and hate speech. Given the often tight-knit nature of gaming communities and their associated live streams, these behaviors can be very damaging to a person or community and can have lasting effects.

In the past, negative behavior fell primarily under the jurisdiction of a website's terms of service and site-wide admin intervention with tools such as IP bans. Few gaming communities had volunteer moderators, and where they did, moderation was limited to the particular community's forum or Ventrilo server.

Nowadays, however, many popular streamers maintain a community outside of their stream via a forum or chat room, and moderation may be required for these additional services. This moderation is typically handled by the streamer, who usually assigns moderator privileges to a trusted viewer. This viewer often serves as a go-between for the community and the streamer, handling situations in the streamer's absence.

IP bans and the like are largely ineffective due to the dynamic nature of modern IP assignment. Because Twitch.tv offers little in the way of built-in chat room moderation, this often means relying on third-party IRC bots, or even a separate IRC server, leaving the moderator with the burden of micromanaging access lists. This, obviously, is not an ideal situation.
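To make that access-list burden concrete, here is a minimal sketch of an IRC bot that kicks nicks on a hand-maintained ban list. The server, channel, and nick are placeholders, and a real deployment would also need authentication, reconnection, rate limiting, and persistent storage.

```python
# Minimal sketch of an IRC moderation bot that enforces a hand-maintained
# ban list by kicking listed nicks when they speak. Server, channel, and
# nick below are placeholders, not real services.
import socket

SERVER = "irc.example.org"     # placeholder IRC server
PORT = 6667
NICK = "modbot"
CHANNEL = "#examplestream"
BANNED_NICKS = {"known_troll", "spam_account"}   # the access list to babysit

def run() -> None:
    sock = socket.create_connection((SERVER, PORT))
    sock.sendall(f"NICK {NICK}\r\n".encode())
    sock.sendall(f"USER {NICK} 0 * :moderation bot\r\n".encode())
    sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
    buffer = b""
    while True:
        buffer += sock.recv(4096)
        while b"\r\n" in buffer:
            line, buffer = buffer.split(b"\r\n", 1)
            text = line.decode(errors="replace")
            if text.startswith("PING"):
                sock.sendall(text.replace("PING", "PONG", 1).encode() + b"\r\n")
                continue
            # Messages look like ":nick!user@host PRIVMSG #chan :hello"
            if " PRIVMSG " in text and text.startswith(":"):
                sender = text[1:].split("!", 1)[0]
                if sender.lower() in BANNED_NICKS:
                    sock.sendall(f"KICK {CHANNEL} {sender} :banned\r\n".encode())

if __name__ == "__main__":
    run()
```

Even this toy version shows the problem described above: someone has to keep the banned-nick list up to date by hand, and a determined user can simply reconnect under a new name.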

Dealing with Trolls and Harassment

Trolls are a plague on many forms of internet communication, and the phrase "do not feed the trolls" has become a standard byword for the advice to avoid engaging with them.

The basic troll often causes little effect with their initial negative behavior, because it is unlikely that anyone at an administrative level will take notice of the action. The more dedicated troll may deliberately test systems, push boundaries, and cause escalating disruption; this is often the point at which communities or players call for some kind of intervention or moderation. An example would be someone creating an offensive character name and pushing it as far as they can until they are forced to change it.

Online disinhibition is a concept that explains the troll phenomenon in terms of anonymity and a decreased sensitivity to social consequences. One two-dimensional model treats internal and external disinhibition as factors of alcohol consumption, social grooming, integration into social groups, and neurotic or psychotic inclination. Such a model could be usefully applied to future work on online disinhibition in gaming specifically, given the many parallels with real-world behavior and traditional online gaming culture.

Best Practices for Community Moderation

Establishing Clear Community Guidelines

Community moderation has never been straightforward; the bottom line is that it is an ongoing exercise in human resources. To minimize potential downsides, we recommend that the first priority of any group leader or community founder be to create systems of governance and a body of laws or guidelines. These can take many forms, ranging from the more open-ended, emergent rules of something like LiveJournal to the very explicit terms of service and rules set out by a professional community site or a guild in an MMO.

For some, the ideal system might resemble social contract theory, in which the guidelines are a continually evolving body of legislation created through discussion and consensus. Whether or not that is feasible, it still starts with a simple written document in which the group's leaders set boundaries for the kind of community they wish to foster and the direction in which they hope to steer it. Revisiting our earlier discussion, the more that gaming communities become alternate realities, the more important it is to have a clear directive from the top, just as we would expect from a state to its citizens or a guild master to his clan.

Training and Empowering Moderators

When a streamer recruits moderators to help manage their chat room and enforce community guidelines, there are a few ways to set those moderators up for success. First, it is important to have a method of communication between the streamer and the moderators, as well as among the moderators themselves. Some communities use voice chat programs like Teamspeak or Skype, while others create Steam groups or private subreddits. Whatever the method, moderators need to be able to convey information quickly and efficiently to each other and to the streamer, particularly in the case of problem viewers who may need to be banned. A unified method of communication also helps build a sense of camaraderie and teamwork among the moderators.

In a similar vein, it can be helpful to have a centralized record of banned users, the reasons for their bans, and the duration of each ban. Some larger communities keep a separate website or spreadsheet for this purpose, but a simple shared document or notepad for the Twitch channel can also be effective.
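As an illustration of what that shared record might contain, the sketch below defines a tiny ban-log structure and appends it to a CSV file. The field names and file name are assumptions, not a prescribed format.

```python
# Hypothetical shared ban log: who was banned, by whom, why, and until when.
# Field names and the CSV file name are illustrative assumptions.
import csv
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class BanRecord:
    username: str
    banned_by: str
    reason: str
    banned_at: str          # ISO timestamp
    expires_at: str         # ISO timestamp, or "permanent"

def append_ban(record: BanRecord, path: str = "ban_log.csv") -> None:
    """Append one ban entry so every moderator sees the same history."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(record).keys()))
        if f.tell() == 0:           # new file: write the header row first
            writer.writeheader()
        writer.writerow(asdict(record))

append_ban(BanRecord(
    username="example_viewer",
    banned_by="modbot",
    reason="repeated harassment after warnings",
    banned_at=datetime.utcnow().isoformat(timespec="seconds"),
    expires_at="permanent",
))
```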

Second, moderators should be familiar with the community guidelines and the streamer's expectations for their behavior. Multiple studies have linked awareness and knowledge of expected practices to better performance, pro-social behavior, and compliance. For this reason, the streamer may consider holding an orientation or training session for new moderators to go over the guidelines and a few hypothetical situations. This is also an opportunity to reiterate that the role is voluntary and carries no tangible compensation, to ensure that moderators do not develop unrealistic expectations or a sense of entitlement.

Utilizing Automated Moderation Tools

Automated moderation tools are used on many popular websites and can offer an enticing solution. The internet is a never-sleeping universe in which millions of people are logged on to various websites at all hours of the day and night. The ability to uphold community guidelines and values in the absence of human moderators is an extremely valuable asset to many website owners, and it can be achieved through a range of automated tools, from automatic message validation and spam filtering to virtual moderators that assist and aid players.

Virtual moderators are AI programs designed to simulate conversation with users while enforcing community values and guidelines. When they function effectively, these tools can be a major asset in a website's continuous effort to maintain its community values. Relying on them as the sole solution, however, can backfire. Many virtual moderators are fairly primitive in their programming and knowledge base; though they have been given a task, they may not understand its execution. This can lead to exchanges that seem strange out of context and, in some cases, provoke negative responses from other community members. In more severe breakdowns, it is not uncommon for users to turn a virtual moderator's automated programming against it in an attempt to cause trouble within the community.

Because of these potential pitfalls, virtual moderation should be used only as a supplement to live human moderation. Automated message validation can be a much more reliable tool in the absence of a live community moderator, with functions ranging from the outright blocking of undesired words or phrases to pattern recognition of improper language.
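The sketch below shows what such message validation might look like, combining a hard blocklist with simple pattern recognition. The word list, patterns, and thresholds are illustrative assumptions, and any real filter would need far more care around evasion and false positives.

```python
# Hypothetical automated message validation: a hard blocklist plus simple
# pattern recognition for spam-like behavior. Words, patterns, and limits
# here are illustrative assumptions only.
import re

BLOCKED_WORDS = {"slur1", "slur2"}                 # placeholder blocklist
LINK_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)
REPEAT_PATTERN = re.compile(r"(.)\1{9,}")          # 10+ repeats of one character
MAX_CAPS_RATIO = 0.8                               # mostly-caps messages flagged

def validate(message: str) -> tuple[bool, str]:
    """Return (allowed, reason); reason is empty when the message is allowed."""
    lowered = message.lower()
    if any(word in lowered for word in BLOCKED_WORDS):
        return False, "blocked word"
    if LINK_PATTERN.search(message):
        return False, "links not allowed"
    if REPEAT_PATTERN.search(message):
        return False, "excessive character repetition"
    letters = [c for c in message if c.isalpha()]
    if len(letters) >= 10 and sum(c.isupper() for c in letters) / len(letters) > MAX_CAPS_RATIO:
        return False, "excessive caps"
    return True, ""

print(validate("hello everyone"))        # (True, '')
print(validate("WWWWWWWWWWWWWWWW"))      # (False, 'excessive character repetition')
```

The blocklist check is the "total lockout" end of the spectrum, while the link, repetition, and caps rules are crude examples of the pattern-recognition end; both sides only backstop, rather than replace, human moderators.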
