May-Ban Festival And Group Accountability

DDO's already underwhelming Mabar Festival took a turn for the worse last night when an automated exploit detection system carried out one of the largest erroneous banning sprees in recent memory. 

Turbine is trying to downplay the issue by claiming that it affected less than one percent of accounts, but that figure is extremely misleading in a free-to-play game; the overwhelming majority of "accounts" were not used during the event and therefore were not at risk.  The players who did get hit with the banhammer were the most active players on their servers, and their absence was highly visible in-game last night.

Public Groups and Exploits

Customer service performance questions aside, there's an interesting design issue here.  The group portion of the event used a public instancing system; players had only limited ability to control who would be present in their dungeon for the boss fight.  As this type of public cooperative content becomes more popular - see also Warhammer's public quests and even WoW's automated group finder - there's a real question of how to enforce exploit policies fairly.

If a member of your guild exploits an encounter during a guild raid, you theoretically bear some responsibility for that action by virtue of choosing to associate with that individual.  (Then again, a dedicated griefer might be willing to join a new guild and take a ban if it brings down a raid full of innocent bystanders along with them.)  When the server provides the group, your ability to avoid benefiting from others' illicit activities is limited.  On the other hand, the developers have no way of determining whether players are complicit outside the game, and the in-game consequences of exploitative behavior are identical whether the beneficiaries were willing or not.

At the end of the day, companies generally have to give players the benefit of the doubt to avoid irritating legitimate customers.  It does not matter how good your product is if players are unable to use it because of poorly communicated and unjustified account suspensions.  In particular, permitting an automated system to issue bans outside of business hours, so that more than twelve hours will pass before anyone is even in the office to figure out what went wrong, is just asking for trouble.

Regardless, this is a real challenge for dynamic public content, which is inherently difficult to test to begin with.