Web Application Security Requirements for Google Providers

This document describes the baseline security controls that web applications provided by Google partners must comply with. In order to prevent security issues, security controls must be carefully designed and should be regularly tested for their effectiveness.

Note - Web sites built specifically for Google must also adhere to the Outsourced Development Requirements



The basic web application requirements are:

  1. Secure the web environment (prevent web server bugs)
  2. Validate user input (prevent XSS and injection attacks)
  3. Avoid third-party scripts and CSS
  4. Use encryption (protect data, prevent mixed content bugs)
  5. Use the right authentication
  6. Authorize requests (prevent XSRF, XSSI etc)
  7. Content Security Policy
  8. Appendix

If any of these requirements cannot be implemented, please contact your primary Google point of contact for escalation to the Google security team.


Google regularly analyzes security bugs to identify common mistakes and good design practices. Experience has shown that the following recommendations help prevent vulnerabilities and avoid rework or delays.

The Google Security Team may identify additional controls, as needed, for sensitive operations or data, or in response to foreseeable threats and vulnerabilities.

Software development is not a perfect science and vulnerabilities will occasionally be found. In such situations the provider must engage with the Google security team to identify appropriate fixes, negotiate timelines, and maintain adequate security.


1. Secure the environment

Depending on the nature of your project the application will be either hosted in your datacenter and on your servers, or on Google infrastructure. In both cases there are some requirements and best practices that help ensure secure operations.

For Software-as-a-Service Applications

A web application is only as secure as the environment it operates in. This means that the security of all the application's dependencies must be ensured. Common dependencies of web applications include:

  • the web application framework it is based on
  • the web server and modules used by it
  • the underlying operating systems
  • network components on the way between the user and the application
  • the storage layer used by the application
  • middleware systems

In addition, other factors such as systems running near the application (management hosts or other web servers, for example) or other web applications that are hosted on the same server, may influence and affect the security of the application provided to Google. The security program and operating procedures used around an application may also have a big impact on its security.

This document cannot describe requirements for every possible dependency. It is expected that the application is operated in a secure environment, implementing industry best practices (e.g. ISO 27001, PCI-DSS, DISA STIGs or NIST checklists and guidelines) and vendor recommendations (e.g. Microsoft Security Guidance, Oracle Hardening Guides, etc.).

At a minimum, the software used around the application must be kept up-to-date, with no unpatched known vulnerabilities. A robust vulnerability management process must be in place to ensure the prompt identification and remediation of systems that are affected by known vulnerabilities or misconfigurations. Further, the systems must be appropriately hardened, following the principle that anything that is not required should be removed.

For Outsourced Software Development

Please see our Outsourced Software Development Requirements for detailed requirements that apply to outsourced software development.

2. Input Validation

Anything that is transmitted from the browser to the application can potentially be manipulated by a malicious actor. As such the application should always assume that any user input is, in fact, malicious. It is a common misconception that input received from cookies, hidden form fields or drop down boxes cannot be changed by an attacker. Everything in an HTTP request can be modified, thus stringent checks of all input are required (see the section on serialization issues for further issues to consider).

Furthermore anything an application obtains from outside its trust boundary may also be malicious. For example, information written to a database by a different application may not have gone through the same stringent sanitization routines that the application itself applies. Therefore it is necessary to validate all input an application receives from any system outside its trust boundary.

The following section describes some common vulnerabilities that are caused by insufficient input validation. This list should by no means be deemed exhaustive. Developers must ensure that input cannot change the way data or code is interpreted, regardless of context.

SQL Injection

An application is vulnerable to SQL injection in the context of a database query when some of the user input is interpreted by the database as being part of the query structure. This often results in the attacker being able to redirect the application flow, read from or even write data directly to the database.

There are many ways to avoid SQL injection vulnerabilities, including escaping user input, using parameterized queries, stored procedures, or ORM frameworks. It is recommended to use one approach consistently throughout the application. Whichever method is chosen, the application must make sure that input received from outside the application's trust boundary cannot modify the way the query is interpreted by the database.

One of the best ways to avoid SQL injection is to actually avoid using SQL at all in your project, e.g. by instead relying on an Object-Relational Mapping (ORM) framework (such as the one for DataStore supplied on AppEngine).
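A minimal sketch of the parameterized-query approach, using Python's built-in sqlite3 module and a hypothetical users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_email(user_input: str):
    # The "?" placeholder passes user_input purely as data; it can never
    # change the structure of the query itself.
    cur = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,))
    return cur.fetchall()
```

A classic payload such as `' OR '1'='1` simply returns no rows, because it is compared literally against the name column rather than being interpreted as query syntax.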

XPath Injection

If an application uses XPath to query server-side data and uses input received from outside the application's trust boundary to form the query, the application must make sure that the input cannot modify the way the query is interpreted by the data backend.

LDAP Injection

If an application queries an LDAP server and uses input received from outside the application's trust boundary to form the query, it must make sure that the input cannot modify the way the query is interpreted by the LDAP server.

Command Injection

Command injection is one of the most serious vulnerabilities an application may suffer from as it allows an attacker to execute arbitrary commands, typically with the same privileges the web server has. We strongly recommend against using user input to form commands that are executed at the operating system level. If unavoidable, the application must make sure that it is not possible for an attacker to modify the command line in a way that allows users to specify additional commands or modify the command the application intends to run.

Path Traversal

If user input determines the filename an application operates on, controls must be in place to ensure that an attacker cannot modify the filename in a way that allows reading from or writing to unintended files. For example, consider an application that allows the user to retrieve a file they previously uploaded. The URL might look like this:

    https://app.example.com/getfile?name=report.pdf

If this application was vulnerable to path traversal attacks, a user might retrieve confidential system files by walking up the directory tree using ../:

    https://app.example.com/getfile?name=../../../etc/passwd

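One common defense is to resolve the requested path and verify it still lies inside the intended directory. A minimal sketch, assuming a hypothetical upload directory:

```python
import os

UPLOAD_DIR = "/srv/app/uploads"  # hypothetical storage directory

def safe_path(filename: str) -> str:
    base = os.path.realpath(UPLOAD_DIR)
    # Resolve the full path (normalizing any ../ segments), then verify
    # that it is still contained within the base directory.
    candidate = os.path.realpath(os.path.join(base, filename))
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError("path traversal attempt rejected")
    return candidate
```

With this check, "report.pdf" resolves inside the upload directory, while "../../../etc/passwd" normalizes to a path outside it and is rejected.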
Writing to Files

If the application allows users to upload or write files, it must ensure that this cannot compromise the security of the server or application. In particular, the application should appropriately separate user-manipulated files from other system and application files, and prevent execution or misinterpretation. For example, an ASP.NET application allowing the user to upload a file must ensure that users cannot upload .aspx files in a way that they would be executed by the web server.

Privacy-relevant metadata (e.g. geolocation data within images) should be removed from these files.

Cross Site Scripting (XSS)

Cross Site Scripting (or XSS for short) occurs when an application redisplays insufficiently sanitized user input in the context of the application's origin (as defined by the Same Origin Policy). If the user input contains certain kinds of scripting code that is interpreted by the user's browser, it may read or alter the DOM of the current page when redisplayed. In many cases, XSS is used to steal users' cookies, but it may also be used for phishing attacks, or even to deface the web page. Unfortunately, XSS is one of the most common security issues in web applications, and due to browser quirks and other unexpected factors quite hard to get right.

In order to work around these factors, the application must take the following precautions:

  • either escape or sanitize user input that is redisplayed by the application
    • if escaping is chosen, replace any characters that might be used to inject HTML or scripting code (e.g. characters such as <, >, ", ', \, …) with values that are harmless in the context they are used in (for example, different substitutions have to be used in a JavaScript context (e.g. > becomes \u003e or \x3e) than in an HTML context (e.g. > becomes &gt;))
    • if sanitization is chosen, it is strongly recommended to use a proven library, such as Caja. HTML sanitization is incredibly hard, so it should only be used when necessary.
  • set a valid and appropriate content type for each page (in the Content-Type HTTP header)
  • set a valid character set for each page (in the Content-Type HTTP header)
  • do not allow the beginning of any file to be under the control of the user
  • consider hosting user supplied files in a different origin, taking into consideration the potential need for authorization
    • Content-Disposition: attachment headers should be used where appropriate

We strongly recommend an escape-on-output approach where the escaping or sanitizing step is performed each time user input is redisplayed, as opposed to when it is stored. This allows easier fixes when vulnerabilities are discovered in the escaping or sanitization routines.
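A minimal sketch of escape-on-output using Python's standard library, with a hypothetical comment-rendering helper:

```python
import html

def render_comment(comment: str) -> str:
    # Escape at output time, applying the substitution rules for an
    # HTML context (quote=True also escapes " and ').
    return "<p>{}</p>".format(html.escape(comment, quote=True))
```

Calling render_comment('<script>alert(1)</script>') yields the payload as inert text (&lt;script&gt;…), so the browser displays it instead of executing it.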

One way to do this consistently throughout the application is to use an auto-escaping templating language (as opposed to manually escaping in code). Note that not all auto-escaping templating languages are context sensitive, so it may still be required to do manual escaping (or to explicitly tag a variable in the template) when the context is something other than HTML.

Note - Mixing client- and server-side templating can result in unforeseen security vulnerabilities. Output from the server-side templating might result in valid expressions that are evaluated by the client-side templating system. By injecting valid client-side expressions an attacker can run malicious JavaScript on the webpage.

Given there is no server-side template system that understands contextual escaping for popular client-side frameworks (AngularJS, ReactJS, Polymer, EmberJS, VueJS...), these designs have inherent security risks. As such, UI element rendering should be solely in the domain of either the client or the server, using RESTful architectures as appropriate.

There are also XSS vulnerabilities that work exclusively on the client-side. This kind of XSS is often referred to as DOM based XSS. Standard server-side escaping of user input does not help with this type of issue as the root cause is in the client-side code. The application must therefore also make sure that whenever client-side scripting code handles user input, or parts of the DOM that may contain user input (such as document.location), it does not introduce XSS vulnerabilities.

XSS vulnerabilities may also surface through file uploads. In many cases it is better to serve uploaded files from a separate, cookie-less domain, and let the same origin policy take care of protecting against attacks. Note that this implies that all uploaded files will be available without authentication, which may not be appropriate for all situations. In case the application needs to protect the files from unauthorized access, additional controls must be implemented to ensure other users cannot be attacked through XSS in uploaded files.

Integer Overflows

When using user-provided numbers in arithmetic operations, the application must account for possible integer overflows.

As an example, consider a web store that uses a 16-bit signed integer to hold the price. A 16-bit signed integer can hold a maximum value of 32767. If an attacker tries to buy 100 items with a price of $600 each, the total price would be $60000. However, since the variable holding the value is a signed 16-bit integer, it would overflow, with its actual value being $-5536.
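The wraparound from the example above can be reproduced directly by storing the total in a signed 16-bit type:

```python
import ctypes

price, quantity = 600, 100
total = price * quantity               # 60000 in unbounded Python arithmetic
wrapped = ctypes.c_int16(total).value  # the same value in a signed 16-bit variable
# 60000 exceeds the 32767 maximum and wraps around to -5536 (60000 - 65536)
```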

XML External Entities

XML has a lesser known feature that permits entity aliases (for example &customEntity;) containing external data such as file or URL content. If an application handles XML input from a system outside its trust boundary it must ensure that external entities referenced from the XML document are not resolved, as failing to do so may result in attackers being able to disclose local files or retrieve URLs from the local network.

External entity loading can typically be disabled through the configuration options of the XML parser in use; consult the documentation for your parser or framework.
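As one example, Python's standard xml.sax parser exposes feature flags for external entities (recent Python versions disable external general entities by default, but setting the flags explicitly documents the intent):

```python
import xml.sax
from xml.sax.handler import feature_external_ges, feature_external_pes

parser = xml.sax.make_parser()
# Refuse to resolve external general and parameter entities (file or URL
# content referenced from a DOCTYPE declaration).
parser.setFeature(feature_external_ges, False)
parser.setFeature(feature_external_pes, False)
```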

Logic Errors

The application must perform logic checks where appropriate. For example, an application must check and enforce that a user cannot buy a negative number of items from a web store, or that it is not possible to transfer a negative amount of money to another person's account.
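A minimal sketch of such a server-side logic check, using a hypothetical withdraw helper:

```python
def withdraw(balance: int, amount: int) -> int:
    # Reject non-positive amounts so that a "negative withdrawal"
    # cannot be used to add money to the account.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount
```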


Serialization and Deserialization

There are many risks involved with various approaches to serialization and deserialization of data that may be influenced by potentially hostile users. Common flaws that your application needs to defend against may involve:

  • remote code execution via powerful serialization APIs (e.g. Python’s pickle API)
  • mass-assignment style vulnerabilities when dealing with other serialization methods such as JSON, YAML, Protocol Buffers, etc.

To avoid security issues, you should:

  • use libraries and APIs that only deserialize primitive types (e.g. int, string, byte array)
  • not blindly store a serialized representation submitted by the client (instead, take an approach where only specifically defined fields are propagated to the internal data model after validation)
  • use protection mechanisms such as encryption or signing when exchanging serialized data representations with other services
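The field-whitelisting approach can be sketched as follows, assuming a hypothetical profile model with two allowed fields:

```python
import json

ALLOWED_FIELDS = {"name", "email"}  # fields of a hypothetical profile model

def update_profile(profile: dict, payload: str) -> dict:
    # json.loads only ever produces primitive types (dict, list, str, int,
    # float, bool, None), so no attacker-chosen class is instantiated.
    data = json.loads(payload)
    # Copy only explicitly allowed fields, after a type check.
    for field in ALLOWED_FIELDS:
        if isinstance(data.get(field), str):
            profile[field] = data[field]
    return profile
```

A mass-assignment attempt such as '{"name": "Alice", "is_admin": true}' results in only the name field being applied; the unexpected is_admin key is silently dropped.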

Other Vulnerabilities

This list is not exhaustive; web applications may need to protect against other input validation related vulnerabilities as well.

3. Third Party Content

Loading content from other sites is dangerous under certain circumstances since a security issue in a third-party site might also affect your application.

To avoid this problem it is not permissible to load scripts or style sheets (e.g. via <link rel="stylesheet" href=…> or <script src=…>) from any third-party site that is not Google owned and operated. Whoever controls the servers those resources are loaded from can insert malicious code into the scripts or style sheets, and thereby gain full control over your site.

Similarly, directly embedding applets, videos, frames, or images (including advertisements, tracking pixels, etc.) from third-party sources is also dangerous, as loading these resources can leak information. For example, the Referer header may reveal where in the application an external resource has been loaded from, which may be both a privacy and a security issue.

When the use of third-party libraries is unavoidable, these resources must be sourced locally and not loaded from any third-party site that is not Google owned and operated. Care should also be taken to ensure that the third-party libraries are:

  • the latest stable version available
  • actively supported (not deprecated or relying on deprecated functionality that may cause issues with ongoing support)

Where the application needs to communicate with another service or API to retrieve or send data, these communications must take place over an HTTPS connection (including suitable validation of the SSL certificate). The use of unencrypted HTTP communications or unvalidated HTTPS connections (e.g. with services delivering self-signed or untrusted certificates), even for public data, is not permissible.

    4. Encryption

    For an attacker it is often extremely easy to listen in on the packets as they are transmitted between a user and the web application (for example when the user is on a public WiFi network). In order to avoid having sensitive data read by an attacker while it is in transit, any application that allows users to log in, or contains anything but public data, must be available solely over HTTPS. Applications developed specifically for Google must always be SSL-only (including any inter-server communications that occur on the backend). A web server listening on port 80 (plain HTTP) and redirecting users to the SSL version of the application is fine, and can make it easier for users to access the application.

    There are certain encryption ciphers and key lengths that are deemed insufficient. To protect against related attacks the web server should be configured to support only TLS 1.1 or newer, and should only accept secure ciphers with strong key lengths (>=112 bits). Note - some old versions of Android do not support TLS 1.1; please assess the impact of this on your project before implementing.

    To avoid someone impersonating your web server, it must identify itself with a valid certificate signed by a trusted certification authority. The CA's public key must be installed by default in the certificate store of common browsers (Chrome, Firefox, IE, Safari).

    The use of HTTP Strict Transport Security (HSTS) headers is strongly encouraged.

    Another important scenario to consider involves sensitive information that is stored in browser cookies: if cookies are not set with the "secure" flag, an attacker can inject a reference to an HTTP IFRAME on any site (even one unrelated to your application) visited via HTTP, and the contents of these cookies will be sent. To avoid this problem cookies must have the secure flag set, all HTTP requests should redirect to HTTPS, and resources should be referenced via scheme-relative URLs (for example, <script src="//foo.com/bar.js">).

    Mixed Content

    Even if an application is only available through HTTPS, it may be vulnerable to attacks by including resources (most importantly JavaScript files) from other servers over plain HTTP. This defeats the purpose of SSL, and therefore the application must make sure that there are no resources included from plain HTTP sites. Typically browsers will help identify cases where resources from non-SSL sites are included by displaying mixed content warnings.

    Encrypting data at rest

    The use of strong cryptography within web applications is encouraged. However, the application must make sure that appropriate cryptographic algorithms are selected and only used within their specifications, and for their intended use. Applications must not use self-designed or modified algorithms for protecting the confidentiality or integrity of Google data or Google customer data. In addition, use proven and thoroughly tested cryptographic libraries, instead of implementing algorithms on your own.

    Key Management

    Whenever an application uses public key cryptography (regardless of whether it is for encryption or signing), the application and/or the operators must follow secure key management practices. Private keys must only be accessible to authorized persons and/or programs.

    Key Rotation

    An application using cryptographic keys must have the ability to rotate those keys. If a key was leaked or has otherwise become unusable, it should be possible to move to a different key without requiring significant effort.

    5. Authentication

    In many cases, the application or the data in the application should not be public. In order to control access many applications ask the user to log in. The requirements in this section apply to all applications that require users to prove their identity to the application.

    Applications for Google Employees

    If an application is going to be used by more than just a few Google employees, it must integrate with the Google internal authentication mechanism. This makes sure that Google employees do not inadvertently enter their Google account password, thereby sharing it with a third-party. In addition, employees who leave the company should no longer have access to Google data, and disabling their corporate Google account should terminate access to third-party applications as well.

    Outside of AppEngine, the best way to fulfill this requirement is to use OAuth2 Login. The preferred way to integrate OAuth2 Login into a web application is through the use of the Google Identity Toolkit which supports multiple programming languages. However there is a variety of other open source or free libraries available as well (good and proven libraries include DotNetOpenAuth (.NET), OpenID4Java (Java), and janrain (Python)). Companies that provide services to reduce the complexity of implementing OAuth2 Login include Ping Identity, Janrain, and Gigya.

    Applications built on AppEngine may integrate even more easily using the Users service, which federates with Google accounts. If you are using an application framework like Django, there is also community support for AppEngine integration.

    If an application requires integration with Google+, it can also integrate with the “login with g+” API.

    Applications that will be deployed on servers internal to Google and do not require support for multiple roles may also integrate with the Google single sign on mechanism using an Apache module.

    If an application will only be used by a small number of Google employees (<25), it may use its own authentication mechanism (though support for OAuth2 Login is still preferred). In such a case the application must ensure that:

    • the password is only stored in a non-reversible format, using a secure cryptographic one-way hash function of a salt and the password,
    • the application allows the user to change the password,
    • it enforces a secure password policy (minimum length, characters, etc.),
    • the application displays an eye-catching warning to users not to use their Google account password,
    • user accounts can be disabled quickly in case an employee leaves the company.

    OAuth2 Login Implementation Requirements

    When supporting authentication through OAuth2 Login, the application must make sure that Google is the only Identity Provider (IdP) that is authoritative for @google.com, @gmail.com and @googlemail.com email addresses.

    The only scopes that should be requested from the user are the ones necessary to uniquely identify them. Typically, those are:

    • https://www.googleapis.com/auth/userinfo.profile
    • https://www.googleapis.com/auth/userinfo.email

    Applications for Google customers

    Whether or not an application intended for Google customers must integrate with the Google authentication framework (Google Accounts) should be determined on a case by case basis, depending on factors such as branding, the information shared by the users, etc. If it is decided that an application must integrate, methods similar to the ones described in the above section are required (OAuth2 Login, Login with G+). In almost all cases OAuth2 Login integration will be required.

    If an application does not need to integrate, the following requirements apply:

    • stored passwords must be hashed, using a secure cryptographic one-way hash function of a salt and the password,
    • the application must allow the user to change the password,
    • the application must enforce a password policy (minimum length, characters, etc.), that is defined in agreement with Google,
    • the login page must be clearly branded as the vendor's, and not as Google. The Google logo should not appear anywhere on the page,
    • the login page must not try to imitate the look of the Google login page.

    Authentication Cookies and Sessions

    In order to remember a user after login, web applications typically use a session ID stored in a cookie to match individual requests to specific users. If the application uses cookies to store the session ID, the cookie must have the HttpOnly and Secure attributes set. In any case, the session ID should not be transmitted as part of the URL, as URLs are retained in the user's history, and may be logged by the web server or proxies in between.

    There are many ways to construct session IDs. If the application elects to use a random string or number, it must ensure that the ID has enough entropy to keep an attacker from being able to guess it using brute force. Furthermore, the session ID must be generated using a secure cryptographic pseudo-random number generator (PRNG) that does not allow the state of the generator to be recalculated from its output.
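    A minimal sketch of both requirements, using Python's secrets module (an OS-backed CSPRNG) and the standard cookie API, with a hypothetical "session" cookie name:

```python
import secrets
from http.cookies import SimpleCookie

# 32 bytes (256 bits) from the OS CSPRNG: the generator state cannot be
# recovered from its output, and brute-force guessing is infeasible.
session_id = secrets.token_urlsafe(32)

cookie = SimpleCookie()
cookie["session"] = session_id
cookie["session"]["secure"] = True    # never sent over plain HTTP
cookie["session"]["httponly"] = True  # not readable by client-side script
header = cookie.output()              # a Set-Cookie header carrying both flags
```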

    Sessions should only remain valid for a certain amount of time. After a longer period of inactivity, users must be required to re-authenticate. Such a session time-out depends on the nature of the application, but should generally be somewhere between 30 minutes and 24 hours. To avoid attacks where an attacker who gained access to a session ID once replays the same session ID over and over, the web application must make sure that the session ID is actually invalidated, as opposed to simply deleting or overwriting the cookie in the client's browser.

    In addition to session time-outs, applications must provide the user with a way to manually end a session. Typically, applications provide a "log out" button or link for this purpose. Similarly to a session time-out, the application must invalidate the session ID once the user elects to end the session. Note that if an application uses something other than a random session ID (such as a signed cookie), it may prove difficult to invalidate the session ahead of its timeout. This must be accounted for by the application, for example by blocking session IDs of users who logged out of an application prior to the session time-out.

    6. Authorization

    In many cases, different users should have access to different data sets. For example, in most authenticated applications only the currently logged in user may change profile data such as name, email address or the account password. All authorization controls must be enforced on the server-side.

    Web applications providing multiple roles must also make sure that users do not perform unauthorized actions by loading pages that should only be available to users of a different role. For example, a page “admin.html” should only be accessible to members of the “admin” role and not to members of the “regular user” role. When pages are shared between different roles, but should provide different functionality based on user role, the application must take special care to allow only actions that are appropriate for the role of the currently logged in user. Hiding a page or control does not constitute access control and all authorization checks must be performed both on read and write functions.

    Applications with complex role separation should take care to fully document the available roles, alongside information on which roles are authorized to perform given actions (e.g. view, edit, delete) within the application. The ACL code should be centrally implemented and easy to audit for correctness based on the project documentation. Tests should be implemented to ensure that these ACLs accurately prevent unauthorized access.

    Cross Site Request Forgery (XSRF or CSRF)

    Applications must prevent an attacker abusing the privileges granted to a user by protecting all authenticated state changing actions against Cross Site Request Forgery (XSRF). In this attack, a malicious actor forces their victim to send a request to the vulnerable application. This is often achieved by luring the victim to a page under the attacker's control. As the browser automatically attaches relevant authentication cookies to requests sent by the user, the request will appear to come from the authorized user if they are logged into the application.

    For example, consider an online banking application that has a feature to transfer money to another account. The URL to do such a transfer could look something like this:


    If an attacker manages to lure their victim onto their site, they could include HTML that causes such a request to be sent:

    <img src="https://bank.example.com/transfer.html?dest_account=666&amount=99.90&submit=true">

    If the user is logged into their online banking portal, the application will receive that request, and check for authentication cookies - which will be present, since the request was sent from the authorized user's browser (only from a different tab).

    To protect against this attack, the application must secure all authenticated state changing actions with XSRF tokens. These tokens must:

    • be bound to the user they were generated for,
    • expire after a certain amount of time (<24h),
    • be passed to the user in a way that they cannot be read by an attacker,
    • be sufficiently long and unpredictable to not be easily guessable.
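    One common way to meet these requirements is an HMAC-based token bound to the user and a timestamp. A minimal sketch, with a hypothetical server-side key:

```python
import hashlib
import hmac
import secrets
import time

SECRET_KEY = secrets.token_bytes(32)  # hypothetical server-side secret
MAX_AGE = 24 * 3600                   # tokens expire after at most 24 hours

def issue_token(user_id: str) -> str:
    # Bind the token to the user and an issue time; the HMAC makes it
    # unforgeable without the server-side key.
    ts = str(int(time.time()))
    mac = hmac.new(SECRET_KEY, f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{mac}"

def verify_token(user_id: str, token: str) -> bool:
    try:
        ts, mac = token.split(":")
        age = time.time() - int(ts)
    except ValueError:
        return False  # malformed token
    if not 0 <= age <= MAX_AGE:
        return False  # expired (or claims to be from the future)
    expected = hmac.new(SECRET_KEY, f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)  # constant-time comparison
```

    The token is then embedded in forms or request headers and checked server-side before any state changing action is performed.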

    Cross Site Script Inclusion (XSSI)

    Many web applications use AJAX to exchange data with the application. A common format for data exchange is JSONP which can be interpreted as JavaScript by the user's browser. Unfortunately, this may lead to Cross Site Script Inclusion (XSSI) vulnerabilities, as the JavaScript can be included from a different origin, and any variables set there can be read.

    As an example, consider a contact management application, that transmits the user's contacts in a JSON file (contacts.js):

    var contacts = {"name": "John Doe", "address": "jdoe@example.com", ... }

    An attacker can now include this script in their own site; when a user visits the attacker's site while logged into the contact management application, the attacker will find a variable "contacts" that contains all of the victim's contacts:

    <script src="https://contacts.example.com/contacts.js"></script>

    In order to protect against XSSI, an application must:

    • not use JSONP for communication of non-public information,
    • not use any other format that sets variables or calls functions with non-public information,
    • not use an Array serialization format (e.g. [ ["John D", "jdoe@example.com"], … ]),
    • protect transmission of any such data with an unguessable, non-predictable token (must fulfill the same requirements as XSRF tokens described above),
    • require POST requests for such data.


    Clickjacking

    Depending on the nature of the actions that can be taken in the application, it may be necessary to protect against Clickjacking.

    If there is no requirement to frame a web page, the application should send the following header:

    X-Frame-Options: SAMEORIGIN

    The header tells the browser not to render the page if it is being framed by a page from a different origin. Unfortunately, some older browsers do not understand this header, leading to somewhat incomplete mitigation. However, for applications that will be used only by Google employees, sending the header is sufficient.

    Applications that are provided to external Google customers should evaluate the need for further protection, and must document the decision for review by Google.

    7. Content Security Policy

    If you develop software specifically for Google, implementation of Content Security Policy is required. In all other cases it is highly recommended.

    CSP is a defense-in-depth mechanism web applications can use to mitigate a broad class of content injection vulnerabilities, such as cross-site scripting (XSS). This goal is achieved through a declarative policy that lets the authors of a web application inform the client about the sources from which the application expects to load resources. Almost all major browsers support some version of CSP.

    Suggested policy


    Content Security Policy can protect your application from XSS, but in order for it to be effective you need to define a secure policy. To get real value out of CSP your policy must prevent the execution of untrusted scripts; below we describe how to accomplish this using an approach called strict CSP. This is the recommended way to implement CSP for sites developed specifically for Google.

    A production-quality strict policy appropriate for many projects is:

    object-src 'none';
    script-src 'nonce-{random}' 'unsafe-inline' 'unsafe-eval' 'strict-dynamic' https: http:;
    base-uri 'self';
    report-uri https://csp.withgoogle.com/csp/<unique_id>/<application_version>

    When such a policy is set, modern browsers will execute only those scripts whose nonce attribute matches the value set in the policy header, as well as scripts dynamically added to the page by scripts with the proper nonce. Older browsers, which do not support the CSP3 standard, will ignore the nonce-* and 'strict-dynamic' keywords and fall back to [script-src 'unsafe-inline' https: http:] which will not provide protection against XSS vulnerabilities, but will allow the application to function properly.

    The final Content Security Policy should be run through the CSP Evaluator tool to ensure that no high risk findings are reported.

    More detailed information about Content Security Policy and Strict CSP is available on csp.withgoogle.com/docs.

    Mode of operation

    Every application should first be run in "Report Only" mode. This way, during initial development, the application can be tested and all CSP violations can be easily spotted.

    After that, when the application is deployed to a live environment, the policy should be switched to the recommended "Enforce" mode. Applications in that mode will not only report all policy violations but also actively block all dangerous elements and actions.
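    Concretely, the mode is selected purely by the response header name; the policy string itself (represented by the <policy> placeholder below) stays the same in both modes:

```
Content-Security-Policy-Report-Only: <policy>   (report-only: violations are reported, nothing is blocked)
Content-Security-Policy: <policy>               (enforce: violations are reported and blocked)
```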

    Reporting endpoint

    All CSP violation reports from live environments need to be collected and stored for further analysis. We have created an application that performs this function for you. Please set the following value in your report-uri directive:

    https://csp.withgoogle.com/csp/<unique_id>/<application_version>
    For example, a collector URI for the first version of the Content Security Policy for gBank would look like this:

    https://csp.withgoogle.com/csp/gbank/1
    If you develop your application on App Engine, you can use the App Engine application ID as the unique application identifier.

    Appendix A: Further Reading

    Appendix B: Bad JavaScript patterns

    There are quite a few ways to do the right thing in JavaScript, but also a couple of persistent bad ideas that refuse to go away. This appendix covers common security errors when writing JavaScript; if you avoid these patterns, you are well on your way to safe code.

    Part I: How to create security hazards on the client side?

    Bad idea #1:

    Let's use *.innerHTML or document.write to output text.

    Sure, this code is pretty convenient:

    function invalid_id(id) {
      document.getElementById('errorbox').innerHTML =
          '<blink>Sorry, ' + id + ' is not a valid identifier!</blink>';
    }

    ...but what happens if id is equal to <script>alert('Look ma, my code is running on www.google.com!')</script>?

    The instinctive response of any developer who made this mistake is to add a bit of code to escape angle brackets, quotes, and such. But trust us, you will fail: with hundreds of code snippets like this, somebody will eventually miss a spot. A new engineer on the team will sooner or later assume that id must be an integer, and is already validated elsewhere.

    In fact, it gets worse: there are cases where innerHTML containing user-controlled strings cannot be manipulated safely at all. There are obscure DOM reserialization bugs present in most browsers that may trigger XSS with as little as:

    foo.innerHTML = [...correctly_sanitized_HTML_data_received_from_server...];
    foo.innerHTML += '.';

    Good idea #1:

    If at all possible, do not use innerHTML, outerHTML, cssText - and do not call document.write(), document.writeln(), jQuery's html(), etc - when dealing with user-controlled strings. The approach is simply too error-prone in the long haul. Instead, create and attach HTML nodes using a JS library such as Google Closure.
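    A minimal sketch of the fix for the invalid_id() example above: assign the user-controlled string through textContent, which always treats it as data, never as markup. renderInvalidId and its errorBox parameter are illustrative names:

```javascript
// errorBox is the DOM element previously fetched with
// document.getElementById('errorbox'); the <blink> styling is left to CSS.
function renderInvalidId(errorBox, id) {
  // textContent never parses markup, so id cannot inject script.
  errorBox.textContent = 'Sorry, ' + id + ' is not a valid identifier!';
}
```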

    Bad idea #2:

    Let's use eval() or <script src=...> to parse JSON / JSONP

    JSON and JSONP are a common way to exchange data in JavaScript environments. To parse the responses received from server-side APIs, it is common to call eval() or to create a new <script src=...> element on the page.

    Unfortunately, this is usually a bad idea. Let's say that you are using a third-party API that returns contact information in this JSONP format:

    {
      name: "John Doe",
      phone: "650-555-5555"
    }

    But what if that site gets compromised or is a victim of simple DNS hijacking? Well, it could return something like:

    {
      name: "John Doe",
      phone: alert('Look ma, no hands!')
    }

    Well, you guessed it: the code will execute in the context of your application. In fact, the site does not even need to go openly malicious - what if they just do not validate phone numbers sufficiently, permitting the attacker to smuggle an unescaped quote into the string?

    {
      name: "John Doe",
      phone: "650-555-" + alert('Look ma, no hands!') + "5555"
    }

    JSON escaping mistakes happen, and you should try to limit their impact on your app.

    Good idea #2:

    In general, use sanitizing parsers such as JSON.parse().

    Note that the "safe" JSON validator proposed in RFC 4627 is actually completely broken - do not use it.
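    A sketch for the contact example above, assuming the server returns plain JSON rather than JSONP: JSON.parse() throws on anything that is not strict JSON, so smuggled code is rejected instead of executed. parseContact is a hypothetical helper:

```javascript
function parseContact(body) {
  // SyntaxError on anything that is not strict JSON - no code ever runs.
  const data = JSON.parse(body);
  return { name: String(data.name), phone: String(data.phone) };
}
```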

    Bad idea #3:

    Let's load this tiny script from Twitter or Facebook...

    This should not be done. Any scripts loaded from third-party sites gain full access to google.com. We do not know how good their security practices are, and we do not want to increase our attack surface in such a way. Keep in mind that even some of the largest and most reputable sites on the Internet have a fair number of serious flaws discovered every year; popularity itself is really not an argument.

    Good idea #3:

    You may be able to implement the functionality in-house, or at least host the scripts on our servers. If that fails, it may be useful to ask the third-party company to develop a safer API for us - there is a good chance they would be willing to help.

    Bad idea #4:

    Let's read application state from location.*

    It is convenient for JavaScript-driven applications to store some of their state information in the URL - using location.hash and history.pushState() APIs in particular. This is not necessarily a bad idea by itself - but it is a very bad idea to blindly trust this data later on.

    Of course, the attacker can put any data within the location.* object simply by redirecting the victim to a carefully crafted URL. Because of this, you need to very carefully scrub the values you are reading back, and never just blindly depend on them to perform state-changing operations, or to render critical parts of your UI.

    You should not assume that characters such as angle brackets or quotes will always be escaped when reading back location.* properties, either. This behavior is browser-specific and varies between the individual properties; for example, location.search behaves differently than location.hash.

    Good idea #4:

    Use location.* responsibly:

    • Do not rely on location.* to store anything that, if tampered with, could have undesirable or lasting effects.
    • Assume that the data retrieved from location.* will contain arbitrary characters or nonsensical values, and always validate and scrub it properly.
    • In general, do not put anything sensitive in the URLs to avoid leaking user data through Referer headers.
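    One way to follow these rules is to never use the fragment value directly, but to map it onto a fixed set of known values with a safe default. The view names below are hypothetical:

```javascript
const KNOWN_VIEWS = ['inbox', 'contacts', 'settings'];

// Turn an untrusted location.hash value into one of the known view names;
// anything unexpected falls back to a safe default.
function viewFromHash(hash) {
  const name = String(hash).replace(/^#/, '');
  return KNOWN_VIEWS.includes(name) ? name : 'inbox';
}
```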

    Bad idea #5:

    Let's set document.domain to google.com.

    At first sight, this may seem like an elegant way to integrate, say, wallet.google.com and picasaweb.google.com: if both sites set their document.domain in this manner, they will be able to exchange data on the client-side with ease. But it is a very, very bad plan: it means that any cross-site scripting vulnerability, anywhere within google.com, can be used to access Wallet, too: all the attacker has to do is to inject scripts onto dancing-hamsters.google.com, and then set their document.domain, too.

    Good idea #5:

    There are several safe alternatives:

    • You can set up secure client-side communications using window.postMessage(). Just be sure to properly set and validate origins on all messages.
    • For legacy browsers, you may be able to leverage Closure XPC.
    • Finally, you can use CORS or plain-old XMLHttpRequest to create a server-side channel to exchange data between the apps.
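    A sketch of the receiving side of such a window.postMessage() channel; the origin string and handler names are illustrative. The crucial part is the early return on an unexpected event.origin:

```javascript
const TRUSTED_ORIGIN = 'https://wallet.google.com';  // hypothetical peer app

// Intended to be registered as:
//   window.addEventListener('message', e => handleMessage(e, onData));
function handleMessage(event, onData) {
  if (event.origin !== TRUSTED_ORIGIN) return;  // silently drop strangers
  onData(event.data);
}
```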

    Bad idea #6:

    Let's talk to our IFRAME without re-confirming its identity.

    Some of our more complex apps frequently rely on IFRAME-based containers, either visible or hidden, to maintain some of the application's state or to compartmentalize some of the logic. For example, in Google Mail or Google Docs, the currently edited document may be an IFRAME container; while in Google+, frames are used for the chat gadget, for third-party games, and so on.

    Unfortunately, there are situations in which these frames can be navigated to a new location without the parent page's knowledge or consent. In essence, you need to be prepared that the frame you created a while ago may no longer be what you think it should be.

    Good idea #6:

    If you need to exchange sensitive data with a frame, always use window.postMessage(): specify the destination origin on outgoing messages, and carefully check it on any incoming ones. If you use any custom frame communication hacks based, for example, on location.hash, you should be prepared to run into trouble sooner or later.

    Bad idea #7:

    Let's build a security mechanism on top of a browser bug.

    There are several legacy bugs and omissions in DOM access controls; for example, it is possible in some browsers to set window.opener, window.name, or window.on* methods for windows that are not in the same domain.

    Some developers use such mechanisms to implement high-performance or more portable client-side IPC schemes that offer some benefits over the standard postMessage API. But doing so is somewhat crazy: the lack of sufficient security controls around these cross-domain interactions means that not only the two consenting parties can leverage them to exchange messages - evil.com can join in uninvited, too.

    A cherry on top: browser vendors sometimes decide to fix these problems on short notice, causing problems for our products.

    Good idea #7:

    Please use the postMessage API, even if it causes a slight performance or portability hit. If you really cannot, reuse an existing standard solution such as Closure XPC. Refrain from developing custom hacks.

    Bad idea #8:

    Let's put user input in document.cookie or in HTTP headers sent with XMLHttpRequest.

    It is tempting to treat document.cookie or XMLHttpRequest.setRequestHeader() as yet another convenient way to store or exchange short strings between various portions of your application. Unfortunately, this is dangerous: the handling of 8-bit characters in these settings is not well-defined, and is broken in some browsers, sometimes leading to serious security bugs.

    Good idea #8:

    Use localStorage or server-side mechanisms to keep track of user data. If you cannot avoid it, make sure that the values you place in cookies or other HTTP headers are thoroughly scrubbed and escaped. Be zealous: %-encoding, RFC 2047 encoding, or base64 encoding should all do the trick.
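    A minimal sketch of the escaping rule using %-encoding; cookiePair is a hypothetical helper that produces the name=value string before it is handed to document.cookie or setRequestHeader():

```javascript
function cookiePair(name, value) {
  // encodeURIComponent %-encodes all 8-bit and separator characters,
  // so nothing the user controls can alter cookie or header structure.
  return name + '=' + encodeURIComponent(String(value));
}
```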

    Bad idea #9:

    Let's navigate to a user-supplied URL without validating it.

    User-supplied URLs can have a special meaning to the browser. For example, clicking on some of them will not take you to a new page - and instead, will execute scripts in the security context of the originating page. Examples of this include:

    • javascript:alert('Look ma!')
    • vbscript:MsgBox("Look ma!")
    • data:text/html,<script>alert('Look ma!')</script>

    If your JavaScript code uses location.* methods or properties to navigate to externally-supplied URLs without confirming that they are actually safe, you will end up with cross-site scripting flaws. You need to be careful, too: because URL parsing is relatively lax and sometimes counterintuitive, trying to ban known bad schemes or detect other patterns is almost always bound to fail. For example, these URLs still work:

    • javas[0x0a][0x0d]cript:alert('Look ma!')
    • javascript://www.example.com/?foo=%0a%0dalert('Look ma!')

    Another pitfall: in Microsoft Internet Explorer, semicolons in URLs also become a deadly weapon - so generating a snippet of HTML containing this markup:

    • <meta http-equiv="Refresh" content="0;URL=user_data">

    ...will lead to code execution if user_data is https://example.com;URL=javascript:alert('Look ma!'), as the second occurrence of URL= in the directive inexplicably takes precedence. Trouble again.

    Good idea #9:

    The right approach is pretty simple:

    • If the URL looks like an absolute reference, make sure that it starts with a specifically permitted protocol followed by a colon and two slashes (e.g. http://, https://),
    • If the URL looks relative, prefix it as appropriate to turn it into an absolute one. Do not output relative URLs as-is - browser URL parsing is messy, and some of them may be interpreted in funny ways.
    • When writing <meta http-equiv="Refresh" content="..."> directives, also reject or percent-encode stray ; characters in the payload.
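    The first two rules can be sketched as a single allow-list check; safeUrl is a hypothetical helper, and the trusted base used to absolutize relative references is a placeholder for your own application's origin:

```javascript
const BASE = 'https://www.google.com/';  // placeholder trusted base

function safeUrl(url) {
  url = String(url);
  if (/^https?:\/\//i.test(url)) return url;           // permitted absolute URL
  if (/^[a-z][a-z0-9+.\-]*:/i.test(url)) return null;  // any other scheme: reject
  return BASE + url.replace(/^\/+/, '');               // force relative refs onto BASE
}
```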

    Bad idea #10:

    Let's allow third-party applets to talk to our scripts.

    Microsoft Silverlight, Adobe Flash, and Sun Java applets all have opt-in mechanisms that allow them to talk to their host pages: in Flash, this is done with the allowScriptAccess parameter, while Java uses MAYSCRIPT.

    It is dangerous to use these options to permit third-party scripts to interact with your JavaScript code, because they grant the applet full access to your JavaScript context - and by extension, to anything that we happen to have on google.com.

    Good idea #10:

    Settle for server-assisted communication channels if possible.

    Part II: How to misuse JavaScript to induce server-side issues?

    Bad idea #11:

    Let's test all the input on the client-side, and call it a day.

    JavaScript code offers a simple way to validate form submissions and other user-supplied data on the client side before submitting it to the server. This is faster and more streamlined than waiting for an HTTP request to go through.

    But this also makes it easier to forget about doing proper input validation on the server side - and makes it harder to spot problems in testing, too. Users are free to tamper with your JavaScript, so always be vigilant: do not assume that, just because your JavaScript code rejects < or > in the first_name field or limits the crawling_speed slider to values between 1 and 100, this is what you will receive from the browser.

    Good idea #11:

    Please use existing server-side input validation frameworks to make sure the data you received from client-side code meets your expectations. Develop unit tests or UI testing procedures to check for the behavior of your server-side code when incorrect inputs are submitted.
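    As a sketch of what the server-side re-check might look like (here in Node.js-style JavaScript, with the field names borrowed from the example above; the helper name is hypothetical):

```javascript
// Re-validate a submitted form on the server, regardless of what the
// client-side JavaScript already enforced.
function validateCrawlForm(body) {
  const speed = Number(body.crawling_speed);
  if (!Number.isInteger(speed) || speed < 1 || speed > 100) {
    throw new Error('crawling_speed out of range');
  }
  if (typeof body.first_name !== 'string' || /[<>]/.test(body.first_name)) {
    throw new Error('invalid first_name');
  }
  return { first_name: body.first_name, crawling_speed: speed };
}
```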

    Bad idea #12:

    Let's just generate JSONP with all the user data.

    JSONP makes it easy to load your responses via <script src=...> or similar mechanisms - but it also makes it easy for the owner of evil.com to do the same!

    If your JSON or AJAX response is sensitive and user-specific, expect it will get intercepted unless you take specific steps to prevent this; for example, this response is easy to hijack by simply defining your own add_to_addressbook() function prior to writing <script src="...">:

    add_to_addressbook("John Doe", "johndoe@example.com");
    add_to_addressbook("Jane Doe", "janedoe@example.com");

    Perhaps less obviously, it is also possible to intercept many types of array and object-based syntax by defining setters or getters for object prototypes - this response may be vulnerable in some browsers, too:

    [
      [ "John Doe", "johndoe@example.com" ],
      [ "Jane Doe", "janedoe@example.com" ]
    ]

    You do not want evil.com to have a copy of your addressbook, do you?

    In addition to the problems with cross-domain inclusion, there is also a risk of non-HTML cross-site scripting bugs if you do not properly escape certain control characters, specify the wrong Content-Type, or make other seemingly harmless mistakes of this sort. For example, this response will be interpreted as HTML by some browsers, if the stars align just right:

    HTTP/1.0 200 OK
    Content-Type: something/made-up; charset=utf-8

    var error_msg = "Not a valid id: '<html><body><script>alert('Look ma!')</script>'";

    It gets even better - because of obscure charset detection rules, this utf-7 attack may be exploitable, too:

    HTTP/1.0 200 OK
    Content-Type: text/html; charset=made-up

    // Now we are smart and block angle brackets in identifiers!
    var error_msg = "Not a valid id: '+ADw-script+AD4-alert('Look ma!')+ADw-/script+AD4-'";

    Good idea #12:

    Please include the right charset= for any text-based document, set the right Content-Type header, and thoroughly validate that input data conforms to the expected data type. Include the X-Content-Type-Options: nosniff header and avoid hosting dangerous file types (e.g. Java, Silverlight, Flash, MS Word, etc.) in sensitive domains.
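    A sketch of these rules for a Node.js response handler; writeJson is a hypothetical helper, and the )]}' prefix is one common convention for making the body useless as a cross-site <script> include:

```javascript
function writeJson(res, obj) {
  // Explicit type and charset, and forbid content-type sniffing.
  res.setHeader('Content-Type', 'application/json; charset=utf-8');
  res.setHeader('X-Content-Type-Options', 'nosniff');
  // The guard prefix breaks execution if the URL is ever loaded as a script.
  res.end(")]}'\n" + JSON.stringify(obj));
}
```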

    Finally, be aware of XSSI attacks and how they can be prevented.