
05Feb24 Roundup: Danger, Will Robinson, Danger!

CX's metaphorical Robot is sounding the alarm. Also, some good news and lots more Bad Job Bingo.
[Image: A gif of The Robot from the original Lost in Space waving his arms in alarm]

Another big thank you to all my subscribers, especially my 8 paid subscribers – y'all are amazing. And a short ask: if you're enjoying this newsletter, will you share it with your friends? It's a great way to get content like Bad Job Bingo in front of the people it can help the most.

In today's Roundup:
News from Around Supportlandia (and Beyond)
And Now for Some Good News
Read, Watch, and Listen
Get Hired
Upcoming Events


News from Around Supportlandia (and Beyond)

Danger, Will Robinson, Danger!

A few weeks ago, shipping and delivery company Dynamic Parcel Distribution had to disable part of its AI chatbot Ruby after a customer got so frustrated with its inability to help him locate a package or contact support that he convinced the chatbot to insult its own company through, among other things, poetry.

The incident is reminiscent of a conversation from late last year in which a customer managed to manipulate a Chevrolet dealership’s ChatGPT-powered chatbot into agreeing to sell him a 2024 Tahoe for $1.00, with the chatbot declaring, “That’s a deal, and that’s a legally binding offer - no takesies backsies.”

And sure, these seem like funny, relatively harmless examples of what can happen when companies don’t show enough care in implementing customer-facing generative AI. 

Except generative AI itself isn’t harmless, regardless of its application – something that’s been especially driven home in the last few weeks:

  • On January 22nd, a robocall using AI-generated audio of President Biden instructed Democrats in New Hampshire not to vote in the presidential primary. This prompted Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel to propose the FCC vote on recognizing AI-generated voices as “‘artificial’ voices under the Telephone Consumer Protection Act (TCPA), which would make voice cloning technology used in common robocall scams targeting consumers illegal.”[1]
  • Last week, abusive, AI-created pornographic images of Taylor Swift spread from a Telegram group dedicated to creating nonconsensual porn of women to 4chan and then to Twitter,[2] forcing Swift fans to flood the platform with “Protect Taylor Swift” tweets in an effort to drown the offensive images out of search results.

Generative AI has been an incredibly popular topic in every Tech and CX Slack community in which I’ve been a member since OpenAI introduced ChatGPT. Every conference I’ve attended for the last year has had at least one – but often more – panel or presentation about how generative AI is going to change the CX landscape. 

Every interview or consultation I’ve had with a tech company executive team – without fail! – has involved talking about implementing generative AI in customer support in some capacity in order to remain competitive.

And it’s not like we haven’t discussed the negatives of generative AI. We’ve talked a lot about the risks that AI hallucinations or poor training and implementation pose to the customer experience and the threat AI poses to CX jobs, and for good reason. 

But these generative AI incidents have highlighted for me how much we’ve been focused on keeping up with the speed of AI’s evolution and using it to our businesses’ advantage; in the process, I think we’ve missed some alarms.

As Lee said in his report: 

While the proliferation of use cases for LLMs marks a new era of AI, we must be mindful that new technologies come with new risks, and we cannot afford to rush headlong into this journey. Risks already exist today that could serve as an attack surface for this [Proof of Concept]. [...] Generative AI beholds many unknowns, and as we’ve said before it is incumbent on the broader community to collectively work toward unfolding the true size of this attack surface — for us to better prepare for and defend against it.

He’s right. There are questions that the CX community can and should collectively ask and start trying to answer, like:

  • What trust, safety, and security risks are going to emerge in the customer service space as generative AI continues to evolve?
  • How do we prevent bad actors from using AI chatbots and other customer-facing AI products as attack vectors?
  • How do we detect and combat bad actors who are using generative AI and deepfakes to hijack customer service interactions?
  • How do we moderate our communities for and protect our users and employees from abuse created using generative AI?

Above all, though, I’d like to make a wider call for an end to keeping up and the start of slowing down when it comes to AI in general. I review a job from AI company Scale later in this issue, and (spoiler alert) it does not fare well, partially because of the company’s mission and partially because of how that mission ultimately affects the advertised role.

Scale’s mission reads, in part:

Our mission is to accelerate the development of AI applications. Better data leads to more performant models. Performant models lead to faster deployment.

Which results in a trust and safety role being introduced like this:

We are growing operations rapidly, on-boarding new customers, and launching products all the time. This raises new strategic questions we need to answer as well as tactical challenges we need to overcome.

This approach – one of developing AI technology as quickly as possible and then expecting your trust and safety team to somehow mitigate its risks after the fact – seems like a prime example of rushing headlong into a journey we don’t really understand and aren’t really prepared for. 

I wish I could say Scale is alone in this strategy, but it’s not. We all have stories of company leadership teams forging ahead with ill-advised AI implementations, which is how you get chatbots cheerfully selling brand-new Chevy Tahoes for a dollar or composing haikus about how useless they are.

And lest this newsletter be besieged by stalwart techno-optimists and bad-faith actors arguing that I just don’t understand AI or that AI will somehow solve the problems of its own existence without human intervention, let me direct you to an article I read ages ago called If It’s Neutral, It’s Not Technology.

I recommend reading the whole article (it’s not long, and it’s free), but if nothing else, read this:

[No] one is arguing that technology is in charge, except to the extent that we willingly surrender control to the technological imperative, and find ourselves in a trap of our own making. And it is hubris to imagine that we are entirely in control of our circumstances, whether individually or collectively. We introduce new technologies into our social systems, and we cannot fully predict or anticipate the effects that the changes will bring about. We exist in a dynamic relationship with our technologies, and they feed back into us, altering us. As John Culkin (1967) put it, "we shape our tools and thereafter they shape us."
We are neither fully in control nor fully out of control; we function in the gray area in-between. And if there is to be any hope of improving our locus of control over our technology, it requires the cultivation of a reflective and critical approach to human invention and innovation, a willingness to question the necessity of a given innovation, to ask what the cost might be and whether it might outweigh the benefit, and to keep in mind that we will not be able to anticipate all of the effects stemming from its introduction.

To wrap this up, I know I’m preaching to the choir here. But I think it’s worth saying for anyone tuning in who isn’t yet convinced.

Generative AI isn’t neutral, nor is it entirely good or entirely bad. And, as every pundit from here to Mars has proclaimed, it’s certainly not going away. But just because it’s not going away doesn’t mean we should mindlessly hurry it along in the name of progress, giving up all hope of shaping it for the better. 

I would argue that we have a duty to our customers to enable the good and mitigate the bad, and this is our opportunity to do that.

Our metaphorical Robot is sounding the alarm: Danger, Will Robinson, Danger! 

It’d be cool if we listened.

Further Reading

I couldn’t fit these into today’s main story (dog knows I went on long enough), but here are some other AI-related pieces that are worth reading:

And Now for Some Good News

These folks were hired, promoted, or retired. How cool is that?

It's Dangerous to Go Alone, Take This

As much as I love the designed Bad Job Bingo cards, I know they're not the easiest to use when you're in the middle of a job hunt. That's why I made a version you can play with as you evaluate jobs: Bad Job Bingo To-Go!

Support Human Job Board

The job board got a new, fresh look this week! It's now easier to browse, search, reorder, and share jobs, and there's an RSS feed now too if that's your jam. I'm considering other improvements as well, so if there's something specific you'd like to see, let me know.

Professional Helpers: Weekly Job Drop

Speaking of job boards, Ashley Hayslett just got hers up and running! She posts CX-related jobs in an easy-to-use Notion doc weekly, so if you're looking, check it out!

Read, Watch, and Listen

Read

All Tech is Human highlighted 10 new roles in Trust & Safety in their newsletter this week (and I played Bad Job Bingo with the CX-related ones in Get Hired to help our T&S folks).

The Carnegie Endowment for International Peace released Countering Disinformation Effectively: An Evidence-Based Policy Guide, a high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation.

Camille Acey wrote about how to manage a customer support team (when you're an engineering leader who's never done anything like that before).

Yas Longoria wrote about how customer support is not a cost center (with data!).

Neal Travis started a Support Terms Glossary, and he's already defined over 50 CX-related words. Word nerds unite!

Rob Dwyer wrote about how to write feedback that drives change for Happitu.

Mathew Patterson shared 13 response templates for tricky customer service emails for Help Scout.

Tim Jordan wrote about how to launch an effective customer knowledge base in the next 30 days (even on a limited budget) for KnowledgeOwl.

Catherine Heath wrote about how hiding your phone number isn't self-service success, also for KnowledgeOwl.

Diana De Jesus launched the Evolve newsletter, a newsletter dedicated to developing the skills and insights of customer success managers.

Hailley Griffis wrote about why she's stayed at Buffer for eight years.

I wrote about the difference between internal and external knowledge bases (and why you need both) for Tettra.

Watch

Stonly talked to Cheryl Spriggs and Zachary Lee in their webinar on how to make your support team more productive without burning them out. (Sorry, it's behind a contact info wall, but the webinar looks good enough to justify it this time.)

Next in Queue talked to Russel Lolacher about how leaders can motivate and share time with their teams.

The Ticket talked to Declan Ivory and Anthony Lopez about Intercom's 2024 Customer Service Trends Report (not linking directly because it's behind a contact info wall and you can get to it from the podcast episode page).

CX Passport talked to Adam Haesler about not making customers respond multiple times to support in order to get their issues resolved.

ActiveFence developed a Trust & Safety course with New York University's Bronfman Center, and has made the course free for everyone.

Mercer Smith gave us a sneak peek at the author's proof of her upcoming book, CXOXO: Building a Support Team Your Customers Will Love.


Listen

The Customer Support Leaders podcast talked to Hilary Dudek about unpacking the journey of career transition after a layoff and how to be intentionally you in those first few days.

Growth Support talked to Kat Gaines about incident management and how to get set up to handle the chaos effectively.

Carly Agar launched her new podcast, The Customer Success Career Coach.

Get Hired

I play Bad Job Bingo with every job listing that appears in the Roundup and categorize them according to how well (or poorly, if I hit Bingo) they do in the game.

However, please remember that a job appearing in a positive category isn’t an endorsement of any role or company, and a job appearing in a negative category doesn't mean I think you shouldn't apply if it works for you. Bad Job Bingo is simply an effort to give you a shortcut to finding roles that may match your needs and values.

These and past contestants can be found at Support Human Jobs.

Green Means Go

No flags, or green flags only! A true unicorn.

  • Policy Design Manager, Child Safety and Emotional and Psychological Harm ($200k-$240k) at Anthropic (Hybrid US-San Francisco or New York City, Office presence 25% of time required)
    • Yes, it's AI. I'm as surprised as you.
  • Director of Product, Support Services ($190k-$230k) at Alma (Remote US)
    • I'm tentatively putting this in Green Means Go because they state that they don't negotiate salaries. In the context of their Careers page and the job description (they explain where they hire people in terms of salary and say they do yearly comp reviews, respectively), it doesn't seem like a flag. But I would definitely recommend you ask for whatever details and data they can give you on this to make sure it truly is an equitable offer for this role's level/band.
  • Trust and Safety Analyst ($170k-$200k) at Anthropic (Hybrid US-San Francisco or New York City, Office presence 25% of time required)
    • Yes, it's AI. I'm as surprised as you.
  • Product Support Specialist ($115k-$130k) at Anthropic (Hybrid US-San Francisco, Office presence 25% of time required)
    • Yes, it's AI. I'm as surprised as you.
    • The job description is honest and straightforward, the compensation is fantastic for a role at this level, the benefits are great, the job application is thoughtful and not burdensome to applicants, and their Careers page is clear and informative. Wild, huh?

Eh, It’s Probably Fine

A few flags popped up, but no serious ones.

  • Vice President, Global Incident Management & Escalations ($213k-$419k) at MongoDB (Hybrid US-New York City, San Francisco; Toronto, Canada; Dublin, Ireland)
  • VP, Trust & Safety ($268k-$360k, location-dependent) at Upwork (Remote US)
    • Don't love their use of "evangelize" in regard to working with stakeholders; there are plenty of alternatives for describing advocacy that don't have religious-extremist undertones.
    • Also don't love them saying "What it takes to catch our eye" instead of something like "Desired Qualifications." Tastes like negging to me.
  • Trust and Safety Enforcement Lead ($240k-$285k) at Anthropic (Hybrid US-San Francisco or New York City, Office presence 25% of time required)
    • I always want to call attention to the fact that companies are very good at acknowledging when certain roles might come into contact with disturbing content but are very *bad* at addressing how they plan to support your mental health post-exposure to said content. Unfortunately, Anthropic doesn't appear to be different in this respect, and I'd encourage anyone who ends up interviewing for this role to ask some pointed questions about it.
  • Trust and Safety Responsible Deployment Policy Lead ($240k-$285k) at Anthropic (Hybrid US-San Francisco, Office presence 25% of time required)
    • Same note as with the other T&S Lead role above.
  • Senior Director, Trust & Safety ($175k-$200k) at Kickstarter PBC (Remote US-Certain States, CAN-Ontario & British Columbia, United Kingdom)
    • Excellent benefits, including 4-day work weeks.
    • That *might* have something to do with the Kickstarter union.
    • I'm just saying.
  • Director, Digital Customer Service and Support Analyst ($130k-$165k) at Gartner (Remote US)
    • Job application is through Workday. My condolences.
  • Senior Community Manager ($130k-$190k) at Second Dinner
    • This position is under Marketing, which seems like an odd choice.
    • I know this title is Senior Community Manager, but given the duties of the role, I'd expect it to be Head / Director of Community (unusually, though, the compensation is pretty spot on even if the title isn't).
    • Other than the minor misalignment between duties and title, it's a thoughtfully written job description and the benefits are decent.
  • Privacy Operations Specialist ($130k-$160k) at Anthropic (Hybrid US-San Francisco, Office presence 25% of time required)
    • Why is this in Eh, It's Probably Fine when the Product Specialist role isn't? Unlike in the Product Support Specialist role, where they did a good job of putting "thrive in fast-paced, reactive situations" in the wider context of the role, they don't really explain why "thrive in fast-paced, high-volume, ambiguous environments" is a Good Fit requirement.
    • The "ambiguous environments" bit stands out to me in particular, since all of the responsibilities of the role seem pretty concrete in scope – so where's the ambiguity coming from?
  • Senior Risk Analyst, Trust and Safety ($126k-$181k, location-dependent) at Thumbtack (Hybrid US, Canada, Philippines)
    • Thumbtack is a virtual-first company, meaning you can live and work from any one of our approved locations across the United States, Canada or the Philippines. – Looooooool at this remote-baiting. Just say hybrid, my dudes.
  • Trust & Safety Data Analyst ($105k-$124k, location-dependent) at Match Group (Hybrid US-San Francisco, New York City, In-office 3 days/week)
    • Job description is thoughtful and well-written, benefits are excellent, and Careers page is clear and informative. This would be in Green Means Go except the salary range seems low for SF and NYC, especially considering they're wanting someone with a master's degree.
  • Technical Services Engineer ($90k-$176k) at MongoDB (Hybrid US-Palo Alto, CA)
    • Strangely wide salary range, although it may be because they'll consider entry-level candidates.
    • Lots of other open Customer Engineering roles at all levels, in the U.S. and internationally.
  • Customer Success Manager ($67k-$132k) at MongoDB (Hybrid US-Austin, TX)
    • Strangely wide salary range; based on the Qualifications section, this is not an entry-level role.

Tread Carefully

Didn’t quite hit bingo, but there were several yellow flags or more than one red flag.

  • Senior Manager, Moderation Operations, Trust & Safety ($211k-$252k) at Roblox (On-site US-San Mateo, CA)
    • See the notes below.
  • Knowledge Management Lead, Trust & Safety Operations ($198k-$231k) at Roblox (On-site US-San Mateo, CA)
    • See the notes below.
  • Head of Customer Operations ($160k-$220k) at Empower (Remote US)
    • Role reports to the CFO, which is an interesting choice.
    • Other, more junior roles from this company were placed in Tread Carefully for lack of salary transparency, and, as you know, it drives me crazy when senior roles get transparency and junior ones don't, so.
  • Quality Assurance Program Lead, Trust & Safety Operations ($160k-$184k) at Roblox (On-site US-San Mateo, CA)
    • See the notes below.
  • Product Support Manager, Payments ($142k-$185k) at Roblox (On-site US-San Mateo, CA)
  • Customer Service Director ($120k-$155k) at Forge Nano (Remote US)
    • This role, Customer Service Director, reports to the Director of Sales. Imagine my face when I read that.
    • Professionally represent the company to the customer and reinforce that professionalism within your team. – This is like the 3rd or 4th time they've mentioned professionalism and the emphasis on it is making my eye twitch. Kinda makes you wonder what their definition of professional is, huh?
    • Advocacy for team in resource management. – This is the second time they mention this in the job listing, which tells me CS is constantly at the little kid's table, and they're starving.
    • I'm putting this in Tread Carefully because although the product legitimately sounds cool, there's a certain old-school vibe I get from this job listing that makes me think they're looking for a clean-shaven, middle-aged white dude with an MBA from a midwestern school and a Chinos fetish. If that's your vibe, do your thing, but the rest of us should probably take care.
  • Threat Investigator, Child Safety ($105k-$162k) at Meta (On-site US-Menlo Park, CA, Seattle, Washington DC)
    • Prioritize and execute with minimal direction or oversight. – Considering the importance of this role, this seems...imprudent.
    • Considering the highly sensitive investigations required and the disturbing content involved, Meta doesn't talk at all about how they'll support the mental health of the person in this role.
  • Enterprise Customer Success Manager (No comp given) at Canva (On-site US-Austin, TX)
    • For the most part, the job description is well-written and honest, but no salary transparency means that this job goes into Tread Carefully.
    • Also, not sure how I feel about a company that describes its culture team as "Vibe."
    • Multi-page job application through SmartRecruiters.
    • There are quite a few other job openings for both Success and Customer Service in the US, the UK, Australia, and the Philippines.
  • Customer Support Analyst ("Competitive" comp not given) at Airwallex (Hybrid Australia-Melbourne, Sydney)
  • Senior Associate, Onboarding Operations ("Competitive" comp not given) at Airwallex (Remote US)

BINGO

Welp.

  • Senior Customer Experience (CX) Operations Manager ("Competitive" comp not given) at PandaDoc (Remote US)
    • You should be ready to do anything in your power to help the team perform at its highest possible level and in a way that is predictable and repeatable. – This is just a weird thing to say.
    • You should take a drink of water every time you come across the word "operation" or a derivative in this job listing. You'll be *really* hydrated.
    • A company talking about growth this much in a job description doesn't really signal good things about the health of the company, does it?
    • Build a collaborative, trusting environment where open communication helps to uncover issues for faster agreement and resolution. – So the current environment is every person for themselves, huh? Cool cool cool cool cool cool cool cool.
    • Creative problem-solver and project manager who can balance multiple projects and business priorities with ease and finesse. – **nervous laughter**
    • Employees will also receive 13.34+ hours of paid time off per month. – Everything this company says in this listing just comes across as slightly off. Just...what a weirdly specific number that is.
    • Selling a product that changes the lives of our customers. – All due respect, I don't think document signing is changing the lives of your customers.
    • Asks for desired salary in job application. BINGO!
  • Senior Support Specialist, Payroll ($60k-$67k) at Restaurant365 (Remote US)
    • Careers page is super white.
    • You are invited to bring your whole self to work. We embrace innovation, authenticity, and autonomy to empower our employees to develop their holistic selves. Our community provides an open and safe environment for all employees to belong and bring their best selves to work. Diversity provides perspective and opportunity; our commitment is to all employees, partners, and customers in the communities we live, work and serve. – This "Commitment to Belonging" statement is some ridiculous word soup.
    • "Share positive vibes" is a stated value. No thanks!
    • Be available to assist customers after hours with Workforce issues that are preventing them from processing payroll. – I'm sorry, what?
    • 10 YEARS OF EXPERIENCE for at most $67k at a SaaS company? Am I reading that right? WHAT THE HELL.
  • Customer Success Specialist (No comp given) at QS Quacquarelli Symonds (Remote ?)
    • Not clear if it's US-Remote or Remote-Worldwide.
    • Misalignment between duties/requirements and title/seniority.
    • No mention of benefits anywhere that I can find, and application asks for desired pay.
  • Patient Service Representative ($37k-$42k) at Midi Health (Remote US)
    • I give them points for pay transparency, but deduct equal points for $18-$20/hr. That's shitty pay for someone with 5 years of experience in "providing high-touch patient experience."
    • Work independently, as well as be part of the team, including accomplishing multiple tasks in an environment with conflicting priorities. – LOL no.
    • Opportunity to join a growing start-up at the ground level. – LOL no.
  • Senior Customer Service Representative ("Competitive" comp not given) at Airwallex (Remote US)
    • We’re looking for proactive, high-energy individuals who have a passion for delivering a seamless customer experience, and who enjoy working in a fast paced environment. Individuals who prove themselves will have ample room for professional growth as we continue to scale rapidly! – So many flags in a single sentence. Might be a new record.
    • We also like to ensure we create the best environment for our people by providing a collaborative open office space with a fully stocked kitchen. – This is a remote role.

Seriously, Maybe Don’t

Don't say I didn't warn you.

  • Manager, Trust and Safety ($165k-$198k) at Scale (On-site US-San Francisco, Seattle, New York)
    • DANGER WILL ROBINSON DANGER
    • From their Careers page: The age of AI is here. Generative AI has the potential to unseat incumbents, catapult new leaders, or solidify existing moats. – What a perfectly normal way to introduce your AI company.
    • Speaking of Careers pages, Scale's needs some work. They repeat whole paragraphs and talk about "Parental Support" without explaining what parental leave or other policies they actually have. Given how much they're hiring, you'd think someone would have noticed the problems before now.
    • We are growing operations rapidly, on-boarding new customers, and launching products all the time. This raises new strategic questions we need to answer as well as tactical challenges we need to overcome. – My friends. As a CX leader (which has almost always included T&S risk assessment / mitigation), I advise you to think very carefully before working in Trust & Safety for a company that routinely launches products before they've fully considered the risks and tactical challenges involved with said products. Think about the battle you'd be fighting daily, which you'd constantly be fighting uphill, and THEN CONSIDER THAT THOSE PRODUCTS INVOLVE GENERATIVE AI. (See the main story for this newsletter, ye gads.)
    • The blend of operations, process improvement, and cross-functional leadership make this a unique and exciting role that will provide an opportunity to work with multiple teams across the company and around the globe. – Okay, so why the hell is this role only at the manager level, then?
    • The requirements section is too long to quote here, but it is a fucking mess: unnecessary elitism, poorly written and repeated requirements, and requirements that are wildly misaligned with the job title.
    • Dare I say that I think this job description might have been written by AI?
LinkedIn status update from Joseph Jewell:

Pro tip: There is a high probability you will be asked what your greatest weakness is during an interview. Even though this question is annoying, you need to be prepared to give a well-thought-out answer. Do not just say the first thing that comes to mind. Also, don’t say cliché answers like “getting caught up in details” or “public speaking” - be HONEST. You need to let potential employers know your TRUE biggest weakness. Tell them about your debilitating anxiety, your crippling depression, your inconsolable OCD, your problematic IBS. Walk them through each medication you take and the potential side effects. Be sure to mention the PTO you will need to take periodically to disappear into the void and recover from everyday tasks like grocery shopping.

Upcoming Events

Striking the Right Balance: Moderating Online Content During the 2024 Elections
February 6, 2024 at 1:00pm ET. LinkedIn Live session hosted by the Oversight Board, featuring Alice Hunsberger (PartnerHero), Katie Harbath (Duco), and Nighat Dad (Digital Rights Foundation). Register here.

Transform Your Interviews: Explore Evidence of Your Confidence and Reframe Success
February 7, 2024 at 12:30 ET. Part 2 of a two-part virtual workshop hosted by Support Driven, featuring Peter Harrison (Zapier). Register here.

Customer Service Trends 2024
February 7, 2024 at 12:00pm ET. Webinar hosted by Intercom, featuring Matt Dale (Moxie CX), Allie Talavera (AppFolio), Jared Brier (AKKO), and Bobby Stapleton (Intercom). Register here.

Why making your support metrics public is worth the risk
February 8, 2024 at 2:00pm ET. Webinar hosted by Front, featuring Parker Conrad (CEO of Rippling). Register here.

HiveMind - AI vs Human: The Future of Customer Service in 2024
February 8, 2024 in Boston, MA. Featuring Craig Stoss (Director of CX Transformation at PartnerHero), Jason Skinner (Founder, CXRefinery), and Kat Gaines (Senior Development Advocate, PagerDuty). Register here.

ElevateCX London Happy Hour
February 13, 2024 at 6:00pm, The Cocktail Club Old Street. RSVP here.

Messaging Malware Mobile Anti-Abuse Working Group (M3AAWG) 60th Annual Meeting
February 19-22, 2024 in San Francisco, CA. Register here.

Mastering Connections: The Art of Building and Nurturing Client Relationships for Lasting Success
February 20, 2024 at 12:00pm ET. Fireside chat hosted by Support Driven, featuring Alex Canedo (Consultant) and Jenny Dempsey (Consultant). Register here.

Gladly Connect Live 2024
March 25-27, 2024 in Scottsdale, AZ and virtually. Register here.

Support Driven Leadership Summit
March 26-27, 2024 in San Diego, CA. Register here.

Write the Docs Portland
April 14-16, 2024 in Portland, OR. Register here.

Support Driven Expo
May 14-15, 2024 in Las Vegas, NV. Call for proposals open now.

ElevateWomen 2024
May 29-June 1, 2024 in San Antonio, TX. Call for speakers open now.

ElevateCX Fall 2024
September 26-27, 2024 in Denver, CO. Call for speakers open now.


LUCYANA (@LUCYANARANDALL): the idea of having tattoos making it harder to get a job is so bizarre bc when i see someone with a lot of tattoos i don't think "degenerate", think "nice, a guy who schedules lots of appointments and shows up to them on time"

  1. An investigation by Pindrop found that the AI-generated Biden voice was created using ElevenLabs’ AI text-to-speech engine. You may recall that I played Bad Job Bingo with one of ElevenLabs’ jobs last week, during which I also called their product a dystopian nightmare. Sucks to be right.

  2. I’m never calling it X.


That's it for this week! If you have items for the Roundup you'd like to submit, you can do so at roundup@supporthuman.cx, but be sure to check out the Roundup FAQs first.


All of Support Human's content is free forever for individuals. You can power this content with a coffee, by subscribing, and by sharing it with your networks! Any support is welcome and hugely appreciated.
Written by Steph Lundberg
Steph is a writer and Support leader/consultant. When she's not screaming into the void for catharsis, you can find her crafting, hanging with her kids, or spending entirely too much time on Tumblr.