So it looks like it’s time to go into more detail about chatting and filters.

Last time I talked about the basics of dropping or filtering out the pesky, obvious four-letter words. Now I want to dive into the deeper subtleties of grey-word filters.

And as I am writing mainly about the field of children’s online entertainment, I am going to continue to focus in that direction. In other words, since I don’t work with the over-18 demographic, some of these rules may not apply there.

Okay, back to kids and games focused on the under-13 crowd. A secondary level of filtering deals with “softer” words that could potentially lead to a serious situation depending on the conversation. I’m talking about conversations that contain no foul language but may in fact end up looking like *somebody* is learning way too much about your kid. These are the “grey area” filtering systems, and they may not be at all obvious to either the parent or the child playing the game.

Example: kids like to talk about school – it’s an important part of their lives. So they may go online and begin talking to their online friends, saying things such as “I got an A on my school paper!” or “At my school, the teachers all make us do lines”. These conversations are perfectly acceptable in every way; however, many of us know that a small minority of dangerous people may try to locate a child by asking questions such as “where do you go to school?” and “can I pick you up from school?”

Scary? Yes. Even more so for the folks who work in the field. We few who work in this field are passionate about making sure kids are having fun AND being safe. The great news is that there are filtering mechanisms being developed to combat these challenges. In many cases, a mechanism may simply filter out exact phrases like the ones above. Other filtering mechanisms offer a weighted system that scores specified phrases according to how dangerous we believe the statements could be (1–10, with 1 being an innocuous use of a word within a sentence and 10 being so dangerous that we take immediate action).
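To make the weighted idea concrete, here’s a minimal sketch of what such a scorer could look like. The phrases, weights, and threshold below are all made up for illustration – real systems have far larger lists and much smarter matching.

```python
# Illustrative weighted phrase filter: each flagged phrase carries a
# danger score from 1 (innocuous) to 10 (take immediate action).
# All phrases and weights here are hypothetical examples.
PHRASE_WEIGHTS = {
    "i got an a on my school paper": 1,   # innocuous school talk
    "where do you go to school": 9,       # attempt to locate a child
    "can i pick you up": 10,              # immediate-action territory
}

ACTION_THRESHOLD = 8  # hypothetical cutoff for immediate action


def score_message(message: str) -> int:
    """Return the highest weight among flagged phrases found in the message."""
    text = message.lower()
    return max(
        (weight for phrase, weight in PHRASE_WEIGHTS.items() if phrase in text),
        default=0,
    )


def needs_action(message: str) -> bool:
    """True when any phrase in the message crosses the action threshold."""
    return score_message(message) >= ACTION_THRESHOLD
```

With this sketch, “I got an A on my school paper!” scores a harmless 1, while “can I pick you up from school?” scores a 10 and is flagged at once.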

The filtering mechanism not only lets us weight phrases and words but also entire conversations – and then the human element can review the whole conversation for appropriate versus inappropriate behavior. This is where your “moderators” come in – i.e., trained staff who read the chat logs and take action. Sidenote: moderators deserve an entirely different blog post, at a later date 🙂
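The conversation-level idea can be sketched too: several individually mild messages can add up to a total that warrants a human look. Again, the keywords, weights, and review threshold here are invented for illustration only.

```python
# Illustrative conversation-level scoring: sum per-message scores and
# route the whole conversation to a moderator once a threshold is crossed.
# Keywords, weights, and thresholds are hypothetical.
KEYWORD_WEIGHTS = {"school": 2, "address": 4, "pick you up": 5}
REVIEW_THRESHOLD = 8  # hypothetical total that triggers human review


def message_score(message: str) -> int:
    """Sum the weights of flagged keywords found in one message."""
    text = message.lower()
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)


def conversation_score(messages) -> int:
    """Total score across every message in the conversation."""
    return sum(message_score(m) for m in messages)


def needs_moderator(messages) -> bool:
    """True when the conversation as a whole deserves human review."""
    return conversation_score(messages) >= REVIEW_THRESHOLD


chat = [
    "what school do you go to?",  # scores 2 on its own
    "is it near your address?",   # scores 4 on its own
    "maybe i can pick you up",    # scores 5 on its own
]
# No single line crosses the threshold, but the total (11) does,
# so the whole conversation would be queued for a moderator.
```

The point of the example: none of those three lines would trip a per-message filter by itself, which is exactly why weighting whole conversations matters.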

Lastly, not all web sites take the time to develop such highly advanced filtering mechanisms. These tools cost a lot of money and take up a lot of time. You can imagine how human language changes, and how it differs from culture to culture – tools like these require constant care and editing. And since there are currently no laws requiring sites to do any of this, there will always be those who would rather spend their money elsewhere, cutting corners in areas like these.

A final note: if you are interested in finding out what your favorite games do with regard to filtering, contact their customer support department. Warning! Not everyone is keen on giving away their trade secrets, but they should at least be able to answer some basic questions about how they approach filtering technology and moderation tools.

That’s it for this time! Remember, feel free to ask any questions below and I’ll be glad to answer as best I can. Until then, enjoy the web!

