
ASU team takes cyberbullying app public

More than half of adolescents experience online bullying.
The ASU BullyBlocker app combines computer science with insights from psychology.
September 20, 2017

BullyBlocker, an interdisciplinary effort, goes beyond the abilities of other apps by combining risk factors with keywords to alert parents

Last December, as other teens were looking forward to the holiday season and planning outings with friends and family, Houston-area high school student Brandy Vela was feeling so overwhelmed by online harassment that she held a gun to her chest and pulled the trigger.

Vela's death is an extreme example of what can happen as a result of cyberbullying, but a 2016 paper co-authored by Yasin Silva, associate professor of computer science at Arizona State University, cites a statistic that more than half of adolescents have been bullied online.

Just this month, Silva and his New College of Interdisciplinary Arts and Sciences team of faculty and students announced the public availability of BullyBlocker, a smartphone application that allows parents and victims of cyberbullying to monitor, predict and hopefully prevent incidents of online bullying.

The first version of the app is currently available for free in the Apple App Store, and the ASU team has received a nearly $300,000 grant from the National Science Foundation to continue research and development of subsequent versions.

While other cyberbullying applications are available, BullyBlocker is different: it is the first application so far to do more than flag potentially harmful posts and comments.

Existing apps comb through the content of a person’s social-media profile looking for keywords or phrases that could indicate bullying, then alert the app’s user, who can be a parent, a guardian or even the victim themselves.

“We are going beyond that,” Silva said. “That’s just step number one in our process.”

BullyBlocker not only identifies those kinds of threats but also combines that information with risk factors (also called states of vulnerability) that have been shown to increase the probability of bullying, such as whether a person has recently changed schools, their socioeconomic status or their race. The app calculates the probability that an adolescent is being bullied based on the keywords and risk factors, then alerts the app’s user, most likely a parent or guardian, helping them stay aware of what is happening in their child’s life.
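The article does not spell out how BullyBlocker actually weighs these signals, but the two-step idea of flagging keyword matches and then folding in risk factors can be sketched in code. The Swift snippet below is a hypothetical illustration only: the type and function names (RiskFactor, keywordScore, estimateBullyingProbability), the keyword list, the weights and the blending formula are assumptions made for this example, not the team's published model.

```swift
import Foundation

// Hypothetical sketch only: the names, keyword list, weights and blending
// formula below are assumptions for illustration, not BullyBlocker's model.

/// A risk factor ("state of vulnerability") with an assumed weight.
struct RiskFactor {
    let name: String
    let weight: Double  // assumed contribution to the overall score, 0...1
}

/// Step 1: the share of posts that contain known bullying-related keywords.
func keywordScore(for posts: [String], keywords: Set<String>) -> Double {
    guard !posts.isEmpty else { return 0 }
    let flagged = posts.filter { post in
        post.lowercased()
            .split(separator: " ")
            .map(String.init)
            .contains { keywords.contains($0) }
    }
    return Double(flagged.count) / Double(posts.count)
}

/// Step 2: blend the keyword signal with the risk factors that apply to this
/// adolescent into a rough 0-to-1 estimate that bullying is occurring.
func estimateBullyingProbability(posts: [String],
                                 keywords: Set<String>,
                                 presentFactors: [RiskFactor]) -> Double {
    let contentSignal = keywordScore(for: posts, keywords: keywords)
    let riskSignal = min(1.0, presentFactors.reduce(0.0) { $0 + $1.weight })
    // Assumed blend: weight the content evidence more heavily than the context.
    return min(1.0, 0.7 * contentSignal + 0.3 * riskSignal)
}

// Example usage with made-up data.
let keywords: Set<String> = ["loser", "ugly", "hate"]
let posts = ["nobody likes you, loser", "see you at practice tomorrow"]
let factors = [RiskFactor(name: "recently changed schools", weight: 0.4)]
let score = estimateBullyingProbability(posts: posts,
                                        keywords: keywords,
                                        presentFactors: factors)
print(String(format: "Estimated bullying risk: %.2f", score))
if score > 0.5 {
    print("Alert the parent or guardian")
}
```

In the real app, the keywords, risk factors and their weights would come from the team's research rather than hand-picked constants, and the alert threshold would be tuned accordingly.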

“We’re trying to use a more holistic approach to really consider all the different signs and factors that might be involved in cyberbullying,” Silva said.


Yasin Silva (far right), ASU associate professor of computer science, meets with the BullyBlocker team. Photo by Deanna Dent/ASU Now

Doing so has been a truly interdisciplinary undertaking. When the idea for BullyBlocker came about in 2013, Silva quickly realized that in order to have the best, most accurate model, the application would need input from areas other than applied computing.

“From the computer science side, we are experts in terms of data analysis and creating computational models and so on,” Silva said. “But we didn’t know much initially, and I think still our understanding is limited, in terms of what is the nature of cyberbullying? How does it happen? And when it happens, what are the factors involved?”

So he approached Deborah Hall, ASU assistant professor of psychology, who was excited to collaborate. Hall and her psychology students were able to provide insights Silva and his computer science students may not have considered.

At a recent BullyBlocker meeting, the team was discussing the risk-factors portion of the app. They were considering adding sexual orientation to the list, as LGBT youth experience higher rates of bullying than their heterosexual peers. Hall pointed out that this could be an issue if an LGBT youth had not come out to their parents, who may be the ones operating the application.

“With each meeting, my knowledge of the computer science and applied computing side of things is getting incrementally larger,” Hall said. “But what attracted me to the project in the first place … was the potential to take some of the research literature and findings from psychology that have been published in more traditional academic outlets and find a way to incorporate them in a meaningful way that will have some real-world consequences.”

Video by Lexi Dakin 

BullyBlocker grew out of a desire to make a positive social impact through use-inspired research. Several of the student team members have been supported by NCUIRE, New College’s Undergraduate Inquiry and Research Experiences program.

One of them, Rusty Conway, said that along with giving him real-world experience that has helped build his computing skills, working with the BullyBlocker team has been meaningful because “it’s going to have some kind of social impact that I can be a part of and hopefully be beneficial to people.”

The first version of BullyBlocker that is available now works in conjunction with Facebook; future iterations are in development to incorporate other social-media platforms like Instagram and Twitter. Funding from the NSF grant will help to that end, Silva said, as well as make further interdisciplinary collaboration possible.

“I really enjoy watching the interactions and how [we all] grow in understanding of relevant theories,” he said. “I see the potential of even greater impact coming out of these collaborations.”

Top photo: Associate Professor Yasin Silva (center) of the School of Mathematical and Natural Sciences stands with his team of students and faculty members from different disciplines during the BullyBlocker app's group meeting at ASU's West campus on Sept. 13. The team created an app that not only detects bullying but also provides resources to the individual being bullied. Photo by Deanna Dent/ASU Now

 

The ethics of advertising hate speech

September 20, 2017

ASU marketing professor shares insights on Facebook, Google advertising practices

Facebook and Google have recently come under fire for allowing advertisers to target ads towards users who express an interest in hate speech or racist sentiments. Facebook has also been criticized for allowing Russian-linked accounts to purchase thousands of ads intended to influence the presidential election.

Bret Giles, professor of practice in marketing in the W. P. Carey School of Business at Arizona State University, shares his insights on the ethics of digital advertising in light of these events.

Question: What responsibility do companies like Facebook and Google have to consumers when it comes to monitoring and regulating the use of their advertising platforms?


Answer: Just as companies of any kind have a responsibility to create a safe environment for customers and employees, so too do Facebook and Google have an inherent obligation to create such an environment within their platforms. For digital platforms such as these, much of what can be done to manipulate them outside their intended purpose is still being learned, is difficult to anticipate and is evolving exponentially. This doesn’t negate any responsibility on the part of Google or Facebook, but it does highlight the difficult balancing act of providing scalability through technology and machine learning with accountability and oversight.

Q: What business risks do these large tech companies take by becoming associated with advertisers that target users open to hate speech?

A: Facebook and Google’s ad platforms are designed to deliver the most relevant advertising possible at an individual level, hopefully giving people a chance to discover products and services they would otherwise never see. Those very platforms can also be used in unintended ways that hurt people. While most people probably don’t think that Google or Facebook are purposefully providing a venue to encourage hate speech among advertisers, the question is fair in terms of what they might or might not be doing to actively prevent it. In this instance, the risk of inaction is substantial, which is why we have seen swift action to make necessary changes to both systems. Not only may people see the platforms in a negative light, but longtime advertisers may also become worried — and both of those actions have negative financial repercussions.

Q: Going forward, what can Facebook do to prevent these kinds of incidents from occurring?

A: Facebook continues to learn how people behave, both good and bad, within the advertising platform they offer. It is from this continued learning that they should draw, expanding their emphasis not only on preventing incidents, but also on anticipating behaviors as their platform matures. As marketers, we speak of empathy in understanding people’s needs by gaining their perspective and looking at the world through their lens. The same holds true here. The goal should not be to play catch-up, always one step behind how the platform might be ill-used; instead, it should be to learn from those perspectives so you can anticipate what is possible before it becomes an unintended reality.

Q: How can social-media users protect themselves against fake, misleading or hateful ads?

A: The best protection is to always remember there is a motivation behind each and every ad out there, be it on a social-media site, a search engine, a television channel or in a magazine. The goal of the ad is to get the user to take some sort of action, and that action may or may not be in their best interest. When something doesn’t seem quite right, don’t second-guess that concern. Look to other trusted sources to corroborate the ad or social-media post. Technology is available that can assist in this effort. YourAdChoices allows a user to control how sites can use their personal information to target the advertising they see.

Katherine Reedy