This past week, I went to GlueCon in Colorado. I had heard good things about the conference, which focuses on APIs, DevOps and serverless environments. The developer conferences I'd been to before were free ones where many talks were just thinly veiled sales pitches. So, I thought I'd try a conference designed more for the attendees than the sponsors. It didn't disappoint.
I participated in my first hackathon there, and it was a blast. Our team won Best Overall and Best Voice UX. Our app, Treat Yo Self, interacts with users through Amazon Alexa to book flights to popular destinations with Capital One rewards points. It was really fun to build, and I'm thankful I got to work with such a smart and energetic team. That was my first time working with Alexa, and it actually leads to my biggest takeaway from the conference:
Voice is the next frontier for UX

(Mobile is the most recent UX frontier we crossed into.)
I had not used Alexa until the hackathon, and I came away impressed. I've always avoided voice recognition because, outside of talk-to-text, it's been unreliable whenever I've used it. Automated customer service lines don't recognize what I'm saying very well, so I always press keys instead. And when I ask my Bluetooth device in my car for its battery life, it calls my mom instead.
We still have a long way to go in both the technology (in her talk, Rakia Finley from Fin Digital pointed out that automated voices can't even pronounce her name right) and adoption (my friend has not opened his Amazon Tap since getting it for Christmas, and I had no desire to get one until the hackathon – I just didn't see the user value in it), but I do think voice is the next frontier in user experience design. Unlike clicking a mouse or tapping a mobile screen, voice is a natural, intuitive way to communicate. It's just not an intuitive way to communicate with machines yet because machines aren't good enough at understanding user intent. It will take a while until we're in the era of voice user experiences. But the big players are all in the space: Google (Assistant), Amazon (Alexa), Microsoft (Cortana) and Apple (Siri). And I can't speak for the other three, but Amazon has made it dead easy for developers to integrate with Alexa.
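To give a sense of how little code an Alexa integration takes, here's a minimal sketch of a skill handler in the shape of an AWS Lambda function. The intent and slot names (`BookFlightIntent`, `Destination`) are hypothetical stand-ins, not the ones from our hackathon app; the request/response envelope follows the Alexa Skills Kit JSON format.

```python
# Minimal Alexa skill handler sketch. Alexa POSTs a JSON request to the
# skill's Lambda function; the handler inspects the request type and
# intent, then returns a speech response in the ASK envelope.
# BookFlightIntent and the Destination slot are hypothetical examples.

def build_response(speech_text, end_session=True):
    """Wrap plain text speech in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point that Lambda invokes with the Alexa request JSON."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # User opened the skill without asking for anything specific yet.
        return build_response("Welcome. Where would you like to fly?",
                              end_session=False)
    if request["type"] == "IntentRequest":
        intent = request["intent"]
        if intent["name"] == "BookFlightIntent":
            city = intent["slots"]["Destination"].get("value", "somewhere")
            return build_response(
                f"Booking a flight to {city} with your rewards points.")
    return build_response("Sorry, I didn't catch that.")
```

Most of the hard parts (speech recognition, intent matching, slot extraction) happen on Amazon's side before your code ever runs — your handler just receives structured JSON.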
APIs will evolve into something much more complex than CRUD
Perhaps the most fascinating talk of the conference was by James Higginbotham of Launch Any (his newsletter, API Developer Weekly, is a good read that's hyper-focused on API development). He talked about the future of APIs. My main takeaway was that, since the number of interfaces that interact with our APIs will keep increasing, our APIs will become more complex. They'll have to support more interactivity, and we may also reach a point where we want to bake more business logic into our APIs. After all, if we have seven interfaces that can interact with the API, we don't want to have business logic in seven places. Here are some of the specific points James touched on:
- Webhooks will become a more fundamental part of API development, which will enable richer interactions with APIs.
- APIs won’t just return JSON data. APIs will have to support returning data in other formats, like a visual representation of the data in HTML. That way, a visual representation of your data can be shared across a variety of devices where you may not have control over the front-end. (Putting display logic in an API may sound like a cardinal sin. But we wouldn’t duplicate the business logic seven times for seven devices, so why should we duplicate the display logic?)
- APIs are going to have to be more adaptive in how much data they accept, and perhaps provide feedback on what additional data they need. For voice interactions, maybe your APIs will have one endpoint that tries for an exact match, but falls back to a fuzzy search if nothing matches exactly. Maybe your voice interactions will have to say, “Do you mean project A or project B?” Maybe your API will be like a customer service agent who says, “I can’t find you based on your name, but I have another way to look you up. What’s your email?” or “I have two people named Sean Williams. What street do you live on?”
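That last point — exact match first, fuzzy fallback, then a disambiguation question — can be sketched in a few lines. This is just an illustration of the lookup logic, not anything from James's talk; the project names are made up, and I'm using Python's standard-library `difflib` for the fuzzy matching.

```python
# Sketch of an "exact match, then fuzzy fallback" lookup, the kind of
# adaptive behavior a voice-driven API endpoint might need.
import difflib

# Hypothetical project catalog for illustration.
PROJECTS = ["Treat Yo Self", "Voice Booker", "Rewards Tracker"]

def find_project(query, projects=PROJECTS):
    """Return (status, matches).

    "exact"     -> the query matched a project name outright
    "ambiguous" -> close candidates; the caller should ask
                   "Do you mean A or B?"
    "not_found" -> nothing resembles the query
    """
    if query in projects:
        return "exact", [query]
    candidates = difflib.get_close_matches(query, projects, n=2, cutoff=0.5)
    if candidates:
        return "ambiguous", candidates
    return "not_found", []
```

The interesting design shift is in the return value: instead of a bare 200-or-404, the endpoint reports *how* it matched, so a voice front-end knows whether to proceed, ask a follow-up question, or give up.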
James’s whole slide deck is available on Slideshare:
Slides are now available for my #gluecon talk: "API Design in the Age of Bots, IoT, and Voice" https://t.co/NwsgSLs3Zf pic.twitter.com/Co132z2QYP
— James Higginbotham (@launchany) May 26, 2017
Cloud hosting keeps pushing the envelope, and it will keep taking over the world
I have yet to work with microservices in a real system. But already, function-as-a-service (FaaS) is here as the next generation beyond microservices. As I learned in our hackathon, AWS Lambda is dead easy to use. Microsoft has Azure Functions and Google has Cloud Functions. There are so few barriers to this serverless hosting: those providers all have free tiers, and it takes very little infrastructure knowledge to get up and running. And there are already companies trying to bring that same serverless infrastructure to enterprise database management: for example, mLab and Fauna both had booths at the conference.
Since I went back to freelance work last September, I’ve noticed that every company seems to be on AWS now. It wasn’t that way just a few years ago: companies would use hosting providers like GoDaddy or a local mom-and-pop data center. Will the small hosting providers be able to survive with AWS’s growing influence and Microsoft and Google becoming more prominent in the cloud hosting game?