Arizona’s next US senator guided by social work

ASU School of Social Work reception honors alum Kyrsten Sinema

December 4, 2018

Arizona’s next U.S. senator, Kyrsten Sinema, began her workweek by finishing her semester as a lecturer in the School of Social Work at Arizona State University. She submitted her grades for the two classes she taught: an undergraduate online course titled “Legal Issues in Social Work” and a graduate seminar, held once a month on weekends, titled “Development Grants and Fundraising.”

“As I submitted them I thought to myself, ‘I'm so lucky, not only am I preparing to head into the United States Senate to serve our great state as the first woman senator in our state, and as the first social worker, but I'm equally proud of the fact that, as of this morning, I've finished my 16th continuous year of teaching in the School of Social Work at ASU,’” Sinema told an audience at a School of Social Work reception held in the Concho Room at the Westward Ho in downtown Phoenix Monday evening.

Photo caption: Senator-elect Kyrsten Sinema talks with one of her former professors, Elizabeth Segal, at a reception honoring current adjunct faculty member Sinema, in the Concho Room at the Westward Ho in downtown Phoenix, Monday, Dec. 3.

Sinema told invited students, alumni, faculty and staff about her journey into politics. As a social worker in the Sunnyslope community of Phoenix, she saw the community needs going unmet.

“I would go to bed each night thinking to myself, 'This was not enough, I need to do more,'" Sinema recalled. “'The change I need to make in my community was greater than what I was able to do today.'"

Sinema enrolled in the School of Social Work and earned her graduate degree in 1999. She was asked to start teaching a few years later. Paola Villa was a student in Sinema’s fall 2018 graduate class.

“She was outstanding, probably one of the best professors I have ever had,” Villa said. “She had everything very strategically planned and knew what to do. She had deadlines but was always available regardless of being in D.C. She replied to emails, like, day of.”

Villa had her photo taken with Sinema at the event. She's still in awe that Sinema served in Congress, ran for election and taught two classes during the fall.

“Everyone in the class is shocked about how much she does, and we're constantly asking her, ‘How do you do it?’” Villa said.

Sinema's answer? Taking care of herself and setting priorities.

For Watts College of Public Service and Community Solutions Dean Jonathan Koppell, Sinema is an example of what public service can be.

“I am enormously proud, not just that you won ... but how you conducted yourself in the campaign,” said Koppell, a political scientist by training. “It showed how politics can be and I would humbly guess that some of your social work training and experience was relevant.”

Sinema explained how her social work ethos guides her interactions with others, including opposing candidates.

“I pledged early on in my career and I doubled down on that pledge in this race for the United States Senate that I would campaign the same way that I govern, the same way that I teach and the way that I try to live my life — which is to seek understanding of those around me rather than to move forward with a combative attitude,” Sinema said.

For Arizona’s Senator-elect, that means trying to understand where people are coming from and having empathy for other perspectives. She said she wants to learn why people have particular points of view or perceptions, even if they are different from her own.

“I promised myself and instituted this throughout my entire campaign — all 700 or so people who were working for us by the end of that campaign — that we would behave in the highest ethical manner every single day, regardless of what happens in the rough and tumble of American politics today,” Sinema said. “That we would stand up every single day and we would continue to behave according to the ethics of our profession that we would not engage in that low name-calling or the ugliness and the pettiness of personal attacks.”

With about 100 social workers in attendance, Sinema used the opportunity to get recommendations for new staffers. Since her election to the United States House of Representatives in 2012, Sinema has exclusively hired social workers with graduate Master of Social Work (MSW) degrees to serve in her casework office. She plans to hire more now that she represents the entire state.

“I need more MSWs, so please send me your best and brightest, hardest working MSWs,” Sinema said. “I expect that the number of calls we'll receive in our office come Jan. 3 will skyrocket and that is exactly what I hope. I want folks all around the state to wake up and think to themselves, ‘I got a problem with the federal government, something's not getting done. I can call my senator and she and her team of social workers will help solve these problems.’"

James Herbert Williams, director of the ASU School of Social Work, couldn’t help but smile when he heard Sinema say she wants to hire social workers with graduate degrees to solve problems.

"I couldn't see a better role model for students, especially women,” Williams said. “It shows what you can accomplish in life, and the importance of an MSW degree.”

In addition to earning her master's degree in social work from Arizona State University, Sinema earned a law degree from the Sandra Day O'Connor College of Law, a PhD in justice studies from the College of Liberal Arts and Sciences and an MBA from the W. P. Carey School of Business.

Paul Atkinson

assistant director, College of Public Service and Community Solutions



How smart is the latest artificial intelligence?

December 4, 2018

Checking in on the current state of the research in marrying man and machine: what scientists are grappling with and what the end game looks like

A month ago a group convened in the University Club dining room at Arizona State University to discuss the future of national security research. There were retired Army and Marine generals, agents from the CIA and a bevy of scientists.

Two trendlines popped out over the peppered bacon and frittatas: Nation-states are vying for technological dominance, and the Holy Grail in that sphere is the successful pairing of humans and artificial intelligence.

Creating machines that think and act like us is as much grounded in the humanities as it is in engineering. Talk to engineers about the problem, and they’ll discuss things far outside the usual lanes of engineering, things like the nature of self, perception and free will. Designing artificial intelligence is not like making a better refrigerator.

Most of us hear about artificial intelligence in apocalyptic tabloid headlines. Elon Musk says it’s going to wipe us out! Stephen Hawking said robots will take over the world!

Right now, worrying about artificial intelligence doing anything of the sort is like discussing overcrowding in the Martian colonies. It’s so far off it’s not worth talking about.

What’s the current state of the research? What are scientists grappling with now? And what does the end game look like?

How machines learn

“We don’t know how we see the world, essentially,” said Subbarao “Rao” Kambhampati, a professor in the School of Computing, Informatics and Decision Systems Engineering in the Ira A. Fulton Schools of Engineering. Kambhampati is an expert in artificial intelligence, automated planning and machine learning. He is also chief AI officer at the AI Foundation. 

We need to understand how humans work, said Heni Ben Amor.

Ben Amor studies artificial intelligence and human-machine interaction. An assistant professor in the School of Computing, Informatics and Decision Systems Engineering, Ben Amor directs the Interactive Robotics Laboratory.

“In order to create these machines and algorithms that adapt to a human, we first need to understand more about humans,” Ben Amor said. “That grey zone there in the middle, between understanding a human and creating products and algorithms for humans, that’s the interesting zone. That’s what we have to think about at the moment.”

Children see the world, manipulate it, play with it, and then they learn. Artificial intelligence has gone in the opposite direction.

“We see the world by learning how to see the world,” Kambhampati said.

That’s the only way we’ve been able to make machines see: You teach the machine how to recognize a dog by showing it millions of pictures of dogs. Immense databases of labeled images are available, thanks to the internet and smartphones.

Machine learning technology advances through very large sets of examples about patterns we can’t actually describe ourselves.

“We don’t have a theory of a dog,” Kambhampati said. “We see enough examples, and we have some kind of a concept we dial up internally that we don’t know how to articulate.”

Think about what a cat is. Now write a set of examples of what a cat is. Pointed ears, whiskers, long tail and so on. The set of examples will always be wrong in some respect. (Foxes also have whiskers, pointed ears and long tails.)
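The failure of hand-written rules can be made concrete. A minimal sketch (the rules and the feature dictionaries here are invented for illustration): a rule-based "cat detector" built from exactly the features listed above classifies a fox as a cat, because the fox satisfies every rule.

```python
def looks_like_cat(animal):
    """Rule-based 'cat detector' using only hand-written features."""
    return animal["whiskers"] and animal["pointed_ears"] and animal["long_tail"]

# Both animals satisfy all three hand-written rules.
cat = {"whiskers": True, "pointed_ears": True, "long_tail": True}
fox = {"whiskers": True, "pointed_ears": True, "long_tail": True}

print(looks_like_cat(cat))  # True
print(looks_like_cat(fox))  # True -- the rules misfire, exactly as the article notes
```

However many rules you add, some animal outside the rule set will still slip through, which is why the field moved to learning from examples instead.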

Enter the huge datasets and the patterns within. That’s where artificial intelligence is right now: using perception as a learning technique. Machines learn by doing and from examples.

“Basically we are trying to figure out how to make learning more efficient,” Kambhampati said.
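The learning loop Kambhampati describes can be sketched in a few lines. This is not the deep network a real vision system would use, just a toy perceptron over invented two-dimensional "feature vectors" (all names and numbers here are made up for illustration), but the loop is the same one: labeled examples go in, and the model's weights are nudged every time it makes a mistake.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset: two well-separated clusters of 2-D feature vectors,
# stand-ins for the image features a real system would learn from photos.
dogs = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))    # label +1
cats = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))  # label -1
X = np.vstack([dogs, cats])
y = np.array([1] * 50 + [-1] * 50)

# Perceptron: adjust the weights whenever a labeled example is misclassified.
w, b = np.zeros(2), 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:  # mistake on this labeled example
            w += yi * xi            # nudge the boundary toward the label
            b += yi

# An unseen point near the "dog" cluster is now classified correctly.
print(np.sign(np.array([1.8, 2.2]) @ w + b))  # 1.0
```

The "reasonable performance on unseen images" Kambhampati mentions is exactly this last step: the model generalizes from the labeled examples to points it has never seen.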

A hurdle on the way to true AI

Artificial intelligence learns from its mistakes, somewhat the way we do. But no one ever showed you 14 million pictures of dogs. Maybe, over a long period of time, you've seen a million pictures of dogs.

“You do this enough times, we can essentially get a reasonable performance in unseen images with dogs and cats,” Kambhampati said. “It can actually predict them.”

And here’s the giant road block.

“Explicable AI is a big challenge,” said Spring Berman, an associate professor of mechanical and aerospace engineering. Berman works on the modeling, analysis, control and optimization of robotic swarms. She is also associate director of the Center for Human, Artificial Intelligence, and Robot Teaming — a unit of the Global Security Initiative at ASU. “It’s like a black box.”

When artificial intelligence doesn’t work, no one knows why it didn’t work.

“Essentially there is this issue of what’s called inscrutability, which is, ‘I do it right, but you don’t quite know how I do it right,’” Kambhampati said. “This has led to lots of fears about the use of machine learning. When they work, you’re happy. When they fail, you don’t know why they failed.”

Bottom line, it’s not there yet. An autonomous car can recognize people standing on a corner, but it can’t tell whether they’re going to cross the street or whether they’re just having a conversation.

“I’m not sure machine learning has reached the point where it can extrapolate or be creative like humans are,” Berman said. “There’s a database the algorithms learn from; they can recognize a stop sign in an image or something like that.”

How do we want to relate to our machines? And how do we want them to relate to us? Those two questions are top of mind for experts.

Human-machine interaction is nothing new, Ben Amor pointed out. Using a VCR was human-machine interaction.

“Most people remember interaction with a VCR as some horrible complex interaction where technically they would have had to read their manual but they didn’t and the rest was confusion everywhere,” he said. “The idea now is to create machines that don’t need a manual. They will adapt to you rather than making you adapt to them through a manual. That would have the advantage of creating a new class of machines that basically customize themselves to the human user.”

Challenges in the past have had to do with humans using a machine the wrong way. Chernobyl is a famous example of this.

“How can we make the robot really intelligent and react to the human partner?” Ben Amor said. “That’s what human-robot interaction is about: How can we have machines that reason about human intent — 'What is the human going to do next and what is his real goal?' — How can they complement our actions to achieve that goal?”

Should artificial intelligence replace us or augment us? Augmentation would be easier to implement.

“It should be as easy to work with them as it is to work with a human secretary,” Kambhampati said. Like a great executive assistant, it should know what you need before you do.

What the future holds

Kambhampati doesn’t believe artificial intelligence will (or should) develop free will. There is a question of trust. If you start working with a machine, after seeing explicable behavior over a period of time, “you will start trusting it,” he said.

People who have worked together for a long time may not need to talk while working, because they implicitly trust each other.

“That can happen between humans and machines too,” Kambhampati said. “The military are interested in getting to the point where there is implicit trust between machines and humans. At the same time, they are worried about that trust being misplaced.”

As it stands now, artificial intelligence that has learned perceptually, from being shown millions of pictures, is susceptible to manipulation. This is adversarial machine learning: outside actors can alter the data so that a machine suddenly starts making catastrophic mistakes on modified images, even though the modifications are invisible to a human.

For example: Take a picture of a dog and change some of the pixels. To you, it still looks like a dog, but the machine sees it as an ostrich. The machine is glomming on to some microscopic values in the picture. This is not understood, and it’s a huge worry from a security perspective. If an army drone sees a herd of cattle, and what’s really there is an enemy platoon, that’s a problem.

“Everything can be seen to be anything else,” Kambhampati said.
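The dog-to-ostrich trick can be sketched with a toy model. Real attacks (in the style of the fast gradient sign method) perturb inputs against the gradient of a deep network; here the "classifier" is just a made-up linear score over 784 "pixels", and every name and number is invented for illustration. The mechanism is the same: a tiny change spread across every pixel, each step chosen to push the score the wrong way, flips the label while no single pixel changes noticeably.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "classifier" over a 28x28 "image": score > 0 reads as "dog",
# score <= 0 as "ostrich". Real attacks target deep networks the same way.
w = rng.normal(size=784)  # made-up model weights
x = rng.normal(size=784)  # made-up "image"
if x @ w <= 0:
    x = -x                # make sure it starts out classified as a "dog"

# Adversarial perturbation: a small per-pixel step against the gradient
# (for a linear score, the gradient is just w), sized to flip the sign.
eps = 1.5 * (x @ w) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w)

print(np.max(np.abs(x_adv - x)))      # each pixel moved by only ~eps
print(x @ w > 0, x_adv @ w > 0)       # True False: "dog" becomes "ostrich"
```

Because the change is spread thinly over all 784 pixels, the perturbed image looks unchanged to a person, which is exactly why this is "a huge worry from a security perspective."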

As associate director of the Center for Human, Artificial Intelligence, and Robot Teaming, Berman spends a lot of time thinking about how humans and machines might team up.

“Our goal is to think about how best to coordinate teams of humans, software agents and robots for a variety of applications which could be transportation, manufacturing, search and rescue or defense,” she said. “We look at creating control strategies for swarms of robots that you could give them a mission and they could then carry it out on their own.”

Potential applications include search and rescue in disaster scenarios (this has already happened once: a lifeguard drone saved two swimmers in Australia in January), environmental monitoring, guarding harbors, construction in outer space and exploration.

“Robots are very good at things that people may not be good at, or robots can access hazardous environments, or do repetitive tasks that people don’t want to do,” Berman said.

How far away are we from intelligent robot helpers?

“It’s hard for me to say,” she said. “Because there’s so much work and testing, (autonomous cars) will be widespread before robot swarms.”

Scott Seckel

Reporter, ASU News