Let’s start by recapping what Alexa does. It listens locally for a “wake word,” which signals that you’re about to give it a command. By default the word is “Alexa,” but you can change it to one of a small number of alternatives. Whatever you say next is sent to Amazon’s servers, which process the request and send back a spoken response. There may be side effects, such as ordering a product.
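The flow described above, local wake-word gating followed by off-device processing, can be sketched roughly as follows. This is a simplified illustration; the function names and the stand-in `cloud_process` are invented for clarity, since Amazon’s actual pipeline is not public.

```python
# Sketch of a wake-word-gated assistant loop (illustrative only).
# Names and structure are assumptions, not Amazon's real implementation.

WAKE_WORDS = {"alexa", "echo", "computer"}  # user-selectable alternatives


def cloud_process(utterance: str) -> str:
    """Stand-in for the server-side request handler."""
    return f"Processed: {utterance}"


def handle_audio(transcribed_words: list[str]) -> list[str]:
    """Scan a transcript stream; everything after a wake word
    leaves the device and is sent for processing."""
    responses = []
    for i, word in enumerate(transcribed_words):
        if word.lower() in WAKE_WORDS:
            # The command is whatever follows the wake word.
            command = " ".join(transcribed_words[i + 1:])
            responses.append(cloud_process(command))
            break
    return responses
```

Note that the gate is purely local: nothing reaches `cloud_process` until a wake word (or something mistaken for one) appears, which is also why false triggers end up stored on the server.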
Your request history is stored. Snippets of anything said within the device’s range may be kept on Amazon’s servers, including utterances you didn’t intend as requests, captured because you said “Alexa” or something that sounded like it.
Don’t confuse Alexa with Alexa.com (aka Alexa Internet), which also belongs to Amazon. It’s a website ranking service which is entirely separate from the conversational software.
- Amazon will release personal information “when we believe release is appropriate to comply with the law.”
- You can use third-party services through Alexa. Alexa will pass information about you to the service, and what the service does with that information is outside Amazon’s control. For instance, if you use Alexa to make phone calls or send SMS messages, they may go through a separate service, which will see your phone number.
- You may only make “personal and non-commercial use” of the service.
- Alexa “processes and retains” your interactions. You can delete your voice recordings one at a time.
- Capabilities, called “skills,” belong to specific developers. A request may send information to the skill’s developer. Amazon gives the example of sending your Zip code when you request a weather report.
- Prime members can make purchases from Amazon through Alexa. Users can require a confirmation code or turn purchasing off entirely. In some cases, purchases can be made from third parties through a skill. Refunds for accidental purchases are sometimes available.
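The Zip-code example above can be made concrete with a rough sketch of the data a weather skill’s developer might receive. The payload shape and field names here are hypothetical, invented to illustrate the data flow; the real Alexa Skills Kit request format differs.

```python
# Hypothetical illustration of the data forwarded to a third-party
# skill developer. The structure is invented for clarity and is not
# the actual Alexa Skills Kit request format.

def build_skill_request(utterance: str, user_zip: str) -> dict:
    """What the platform might forward to a weather skill:
    the request itself plus the user data the skill needs."""
    return {
        "skill": "weather-report",
        "utterance": utterance,
        # Once this reaches the third-party developer, what happens
        # to it is outside Amazon's control.
        "zip_code": user_zip,
    }


request = build_skill_request("what's the weather today", "72712")
```

The point of the sketch is that fulfilling even a simple request can require handing personal data (here, a Zip code) to a party other than Amazon.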
If Amazon gets a bona fide warrant for relevant evidence in a criminal investigation, it will likely comply. However, it pushes back against overbroad requests.
The Bentonville, Arkansas, police demanded information on a suspect’s Alexa interactions. There didn’t seem to be a specific reason to believe they would provide incriminating information beyond the presence of an Amazon Echo in the suspect’s home. Amazon refused to release the data. However, the suspect allowed the police to hear it, making the objection moot. The case was ultimately dropped for lack of clear evidence that the death was a murder.
Amazon’s stated policy is to store only the audio of requests, not continuous conversation. Without knowing the internal details of Alexa’s operation, it’s impossible to tell whether an Echo could be turned into a full-time surveillance device, or whether Amazon might do that under legal compulsion. It certainly stores some chance bits of conversation due to false triggering of the wake word. Stored recordings might include identifiable voices, calls to persons of interest, or suspicious requests.
While Alexa is a US-based service, it may use servers outside the US. Governments with fewer privacy restrictions than the US may be able to demand access to the servers.
The Fourth Amendment to the US Constitution prohibits “unreasonable searches and seizures.” It might seem to be the main protection against sweeping demands for information on servers. Amazon, however, chose to address the matter as a First Amendment issue. It argued that people have “the right to receive, the right to read, and freedom of inquiry.” It also argued that some of the content it provides is “constitutionally protected opinion.”
The key point is that “the fear of government tracking and censoring one’s reading, listening, and viewing choices chills the exercise of First Amendment rights.” Because of this, a “heightened standard” of relevance and need is necessary when demanding records of this type. People might, for example, make embarrassing purchases through Alexa. Having this information too readily available to the government (and from there to the public) would intimidate them.
Amazon allows only personal and non-commercial use of Alexa. It’s not clear what this means for its use in a professional environment. In any case, using it where there’s a strong expectation of confidentiality treads on dangerous ground and could expose the user to liability. Someone who steals your password would have access to your past interactions.
Use of Alexa in a law office could damage attorney-client privilege. Information shared with a third party generally isn’t protected by the privilege, and it might be possible to demand the data with a warrant. It isn’t clear whether a secure server constitutes a third party in this sense.
Healthcare professionals need to know that Alexa isn’t HIPAA-compliant. Having Alexa activated during a discussion of protected health information (PHI) could expose a provider to significant liability if the information is breached. It also means that Alexa skills directed at processing PHI aren’t allowed.
The Need for Caution
The legal issues are mostly unsettled, and the safest course is to be cautious about using Alexa. Turning it off when starting a seriously confidential discussion is simple prudence. Offices that routinely engage in such discussions shouldn’t have it present at all. Users should review their settings and turn off features which they don’t expect to use.
The risk isn’t high for most people, but things can go wrong. People have to decide how much risk they’re comfortable with, weighed against the convenience. Even talking with the windows open carries some risk of losing privacy, so this isn’t an entirely new issue. Where privacy is a major concern, though, people close the windows, and they should take similar precautions with electronic listeners.