Lawyer jobs being taken by robots is a popular media theme these days. However, after attending the South by Southwest (SXSW) technology conference last week, I think the robots may, in fact, need us. In the next few years, technology will take us to a level of complexity that is almost unimaginable. That complexity will stretch our existing legal frameworks and require highly skilled lawyers to navigate the outcomes.

SXSW is one of the most important technology events on the global calendar. More than 75,000 delegates descended on Austin, Texas, to hear over 5,000 speakers ranging from IT company executives and Hollywood celebrities to no fewer than six 2020 presidential candidates, including former Starbucks CEO Howard Schultz. The big tech companies pay millions to completely transform sleepy restaurants into "Experience Spaces" to capture the eye of SXSW's young "influencer" crowd. In fact, the event has been called the "Millennials' Woodstock."

The big theme for the event was the ubiquity of artificial intelligence. Eminent futurist Amy Webb was asked at the end of her session on tech trends — attended by over 2,000 people — why she had not mentioned AI. She said that the answer was simple: "AI is in everything and every industry, now and in the future."

AI was everywhere at SXSW, in sessions on everything from health care, transport and finance to storytelling. Concern about bias in AI was widespread as well, with at least six sessions dealing with the topic in part or in whole.

The examples of bias in AI seem to be proliferating. Last year, MIT released research that found facial recognition software had a 34 percent error rate for darker-skinned women, compared to 1 percent for lighter-skinned men. Google Inc. made the news recently for removing gender-based pronouns from its "smart compose" Gmail technology because of the risk of gender bias.

There have been allegations of racial bias in algorithms used by judges to determine the likelihood of recidivism as well as in AI used by police to predict where crimes might happen. The press has reported that a large technology company stopped using an AI-enabled system to assist with recruitment when the system was found to be biased against women.

To be clear, a machine itself is not biased. Problems arise when the bias of the programming team, or the bias inherent in the encoded business rules or data sets, is passed into the machine's system, as the sketch below illustrates. Bias within an organization can have significant legal repercussions in a variety of ways, and organizations will not be able to simply blame a machine for any bias displayed in its behavior.
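To make that mechanism concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the hiring scenario, the "proxy" feature standing in for something like a zip code, and all of the numbers. The point is that a model trained on historically biased decisions can reproduce the disparity even when the protected attribute itself is withheld from it.

```python
# Hypothetical illustration: bias in historical training data is inherited
# by a model, even though the protected attribute is never shown to it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)                     # genuine qualification signal
group = rng.integers(0, 2, size=n)             # protected attribute (0 or 1)
proxy = group + rng.normal(scale=0.3, size=n)  # e.g., a zip code correlated with group

# Fabricated historical decisions, biased against group 1 at equal skill.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# The protected attribute itself is excluded from the training features...
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# ...yet the model reproduces the historical disparity via the proxy.
for g in (0, 1):
    print(f"group {g}: selection rate {pred[group == g].mean():.2f}")
```

The bias here lives in the historical labels and leaks through a correlated feature; the machine simply learns what it is shown.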

It is critical that bias be eliminated from AI systems. Some of the suggested technology fixes involve an algorithm that polices other algorithms to limit bias; one simple version is sketched below. Another solution offered at SXSW was that algorithms should be developed by diverse programming teams. This fantastic idea should be a prerequisite for all development teams, not just to avoid bias but also to get better outcomes.
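As a rough idea of what such a "policing" algorithm might look like, here is a hypothetical sketch that audits another model's output for disparate selection rates. The function and its interface are inventions for illustration; the four-fifths threshold echoes the U.S. EEOC's disparate impact guideline, though a real audit would apply a battery of fairness metrics.

```python
# Hypothetical "algorithm that polices other algorithms": flag a model
# whose selection rates differ too much across groups.
import numpy as np

def audit_selection_rates(predictions, groups, threshold=0.8):
    """Flag a model whose lowest group selection rate falls below
    `threshold` times its highest group selection rate."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    worst, best = min(rates.values()), max(rates.values())
    return {"rates": rates, "passes": worst >= threshold * best}

# Run against the predictions from the previous sketch:
# audit_selection_rates(pred, group)
# -> {'rates': {0: ..., 1: ...}, 'passes': False}
```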

When YouTube first started, 10 percent of its videos were uploaded the wrong way up. Nobody could understand why until it was realized that everyone on the development team was right-handed and had not thought about how left-handed people would hold a camera differently when recording videos.

AI raises another issue for lawyers. When companies make decisions that may be challenged by regulators or plaintiffs, they may need to explain why a particular decision was reached. This is hard to do if the decision was, in fact, made by an AI. Early AI used "black box" technology, meaning it was not possible to determine how the AI had made a decision or what data it had used. Newer AI applications are moving toward auditability of the decision-making process.
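As a rough illustration of what auditability can mean in practice, here is a hypothetical sketch: a simple linear scoring model whose every decision can be decomposed into per-feature contributions and kept as an audit record. The loan scenario, feature names and numbers are all fabricated, and production systems rely on more sophisticated explanation techniques, but the principle of recording what drove each automated decision is the same.

```python
# Hypothetical auditable decision: a linear model's output can be broken
# down into per-feature contributions and logged for later explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]

# Fabricated past lending decisions (1 = approved).
X_train = np.array([[60, 0.2, 5.0],
                    [30, 0.6, 1.0],
                    [80, 0.1, 10.0],
                    [25, 0.7, 0.5]])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

applicant = np.array([45.0, 0.5, 2.0])
decision = model.predict(applicant.reshape(1, -1))[0]

# Audit record: how much each input pushed the decision score up or down.
contributions = model.coef_[0] * applicant
print("decision:", "approve" if decision == 1 else "decline")
for name, value in zip(features, contributions):
    print(f"  {name}: {value:+.3f}")
```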

One of the more challenging propositions at SXSW came from Nick Polson, co-author of "AIQ: How AI Works and How We Can Harness Its Power for a Better World." Polson, like many of the technologists at the event, was of the view that bias in AI will ultimately be resolved and is not a major issue. He did, however, think that interpretation of AI results will ultimately fall to a new professional group. In the same way that lawyers interpret laws, this new profession would specialize in interpreting the outcomes produced by AI systems.

AI-powered voice assistants like Siri have caught on with astonishing speed. More than 120 million smart speakers have been sold in the U.S., and virtually every appliance manufacturer and car maker has implemented, or plans to implement, voice operation. Samsung plans to include its Bixby assistant in all of its devices by 2020. A large tech company has released the first of its household appliances, a voice-operated microwave. For those who want to really embrace voice-controlled everything, a tech company has joined with the largest home builder in the U.S., Lennar Corp., to pump out purpose-built connected homes by the hundreds.

Voice technology enables what are known as persistent recognition systems: you are always being monitored, in your home and elsewhere. This creates some significant legal issues, not the least of which is how to respond to requests from law enforcement and litigants for access to the recorded information. Imagine a world where your toaster gets subpoenaed to corroborate evidence your connected crockpot overheard. Voice technology also enables what is known as behavioral biometrics, through which companies can determine your emotional state.

For example, one major tech company has filed a patent that would enable its voice assistant to analyze your voice and determine whether you are sad or sick. A large retailer has a patent for a connected shopping cart that detects stress levels and can alert a shop assistant if you appear upset and need help finding something. Kia Motors Corp. is working on a car that changes its interior environment in response to your emotions. This technology could put an end to road rage, with soothing lights and a seat massage kicking in when you get angry on the streets.

What is the law related to companies knowing what is happening inside our heads? What is the governance framework around such bio data? Who owns it? What can companies do with it? Can it be sold or ported to other companies?

This bio data extends to your DNA, which has become very relevant as the cost of sequencing a human genome has fallen from roughly $100 million in 2001 to under $1,000 per person. With CRISPR technology, it is also possible to edit a genome, as purportedly happened in China a few months ago when a scientist claimed to have created the first genome-edited babies. Law enforcement is in favor of universal genetic databases storing everyone's DNA. What are your rights with respect to your DNA, and how can you restrict others from using it?

This massive focus on data led Amy Webb to pronounce confidently that one of the major trends is that "Privacy is dead." This was echoed by one of the more extraordinary speakers at SXSW, Roger McNamee, founder of venture capital fund Elevation Partners, a true Silicon Valley insider and an early investor in Facebook Inc. McNamee was there to talk about his new book, "Zucked," in which he criticizes Facebook and other big tech companies for tracking us and then using the information to modify our behavior or profit from its sale to third parties.

The theme of loss of trust in big tech was addressed in a number of sessions, reaching its peak with a speech by Sen. Elizabeth Warren, D-Mass. The presidential candidate made a campaign promise not only to split up big tech companies that both operate a marketplace and participate in that marketplace, but also to reverse some of their major acquisitions, such as Facebook's purchase of Instagram Inc.

Given recent sweeping privacy reforms in Europe and now in California, I do not agree that privacy is dead. I think privacy laws have an important role to play in protecting our rights with respect to our personal information, as increasingly complex systems make surveillance and behavior modification more pervasive.

Autonomous vehicles have come a long way in a short time. Drive.ai has a fully autonomous fleet operating commercially in the Dallas area right now. Experts in the field say that while advances are being made in the U.S., other countries with different road laws are making greater progress.

SXSW speaker Malcolm Gladwell, author of books such as "The Tipping Point" and "Outliers," said that he was not a huge fan of self-driving cars, especially given the cybersecurity issues. His point was that these cars would likely cause traffic fatalities in the U.S. to drop from their current level of about 40,000 per year, but he questioned whether we would be prepared for the deaths that result from hacking. Would society be able to cope with 1,000 people dying in one day because a terrorist hacked people's cars and caused them to accelerate wildly? For a lawyer, the questions are myriad, not the least of which is where liability falls for a range of issues.

Finally, it seems we are destined for a tech-induced fork in human evolution. Neuroscientist Heather Berlin discussed technology already on the market that allows disabled people to move prosthetic limbs using only their thoughts. It may be only a few years before this technology becomes available to the mainstream. What are the implications of a world where you can simply buy a neural implant to triple your child's memory? The Transhumanist movement is dedicated to augmenting people with technology. What are the implications for our laws, our ethics and our humanity when it is possible to create superhumans in this way?

It is not possible to say with clarity what the future of technology holds, but it is certain to be very complex, and lawyers will have an important role to play in helping navigate that complexity.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.