Access Control Headers For The WordPress REST API

Sending headers, including cross-origin (CORS) headers, has changed a bit in version 2 of the WordPress REST API. Access control headers are sent by the function rest_send_cors_headers(), which is hooked to the rest_pre_serve_request filter. You can easily change the headers by unhooking that function and adding your own.

Below are some examples using access control headers, but really any type of header could be added this way. That said, keep in mind that the WP_REST_Response class, which should be used for all responses, also gives you the ability to add headers. Any headers unique to a single request should be set there.
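For example, here is a minimal sketch of setting a per-request header on a response object. The route, namespace, and header name are hypothetical; the header() method on WP_REST_Response is what does the work:

```php
// Hypothetical endpoint that sets its own header via WP_REST_Response.
add_action( 'rest_api_init', function() {
	register_rest_route( 'my-plugin/v1', '/example', array(
		'methods'  => 'GET',
		'callback' => function() {
			$response = new WP_REST_Response( array( 'hi' => 'roy' ) );
			// Headers set here apply to this response only.
			$response->header( 'X-My-Plugin', 'example-value' );
			return $response;
		},
	) );
} );
```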

 * Use * for origin

add_action( 'rest_api_init', function() {
	remove_filter( 'rest_pre_serve_request', 'rest_send_cors_headers' );
	add_filter( 'rest_pre_serve_request', function( $value ) {
		header( 'Access-Control-Allow-Origin: *' );
		header( 'Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE' );
		// Note: browsers will not send credentials when the allowed origin is the wildcard *.
		header( 'Access-Control-Allow-Credentials: true' );

		return $value;
	}, 15 );
} );

 * Only allow GET requests

add_action( 'rest_api_init', function() {
	remove_filter( 'rest_pre_serve_request', 'rest_send_cors_headers' );
	add_filter( 'rest_pre_serve_request', function( $value ) {
		$origin = get_http_origin();
		if ( $origin ) {
			header( 'Access-Control-Allow-Origin: ' . esc_url_raw( $origin ) );
		} else {
			header( 'Access-Control-Allow-Origin: ' . esc_url_raw( site_url() ) );
		}
		header( 'Access-Control-Allow-Methods: GET' );

		return $value;
	}, 15 );
} );

 * Only allow same origin

add_action( 'rest_api_init', function() {
	remove_filter( 'rest_pre_serve_request', 'rest_send_cors_headers' );
	add_filter( 'rest_pre_serve_request', function( $value ) {
		header( 'Access-Control-Allow-Origin: ' . esc_url_raw( site_url() ) );
		header( 'Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE' );
		header( 'Access-Control-Allow-Credentials: true' );

		return $value;
	}, 15 );
} );

 * Only from certain origins

add_action( 'rest_api_init', function() {
	remove_filter( 'rest_pre_serve_request', 'rest_send_cors_headers' );
	add_filter( 'rest_pre_serve_request', function( $value ) {
		$origin = get_http_origin();
		if ( $origin && in_array( $origin, array(
				//define some origins!
			) ) ) {
			header( 'Access-Control-Allow-Origin: ' . esc_url_raw( $origin ) );
			header( 'Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE' );
			header( 'Access-Control-Allow-Credentials: true' );
		}

		return $value;
	}, 15 );
} );
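One addition that often matters in practice: if a cross-origin request carries an Authorization header (for example, for authenticated REST requests), the browser's preflight check requires that header to be listed in Access-Control-Allow-Headers, or the request will be blocked. Here is a sketch of the same pattern with that header added; the exact list of allowed headers is an assumption about your setup:

```php
// Hypothetical example: allow the Authorization header so that
// authenticated cross-origin requests pass the preflight check.
add_action( 'rest_api_init', function() {
	remove_filter( 'rest_pre_serve_request', 'rest_send_cors_headers' );
	add_filter( 'rest_pre_serve_request', function( $value ) {
		header( 'Access-Control-Allow-Origin: *' );
		header( 'Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE' );
		header( 'Access-Control-Allow-Headers: Authorization, Content-Type' );

		return $value;
	}, 15 );
} );
```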

I have a plugin for setting CORS for GET requests and a plugin for setting CORS for ALL requests on GitHub.

Read The Source Luke

There was a time when I really wanted to work for 10up, not to be confused with the time they wanted me to work for them. Before I interviewed with their president and founder Jake Goldman, I did as much research on him as I could.

That’s a good pre-job interviewing strategy. It didn’t work for me then, but I learned some really useful stuff from reading up on Jake and looking through his WordCamp presentations.

My point here is not to present some theory on getting jobs that is likely full of shit. I hear 10up is hiring and if you want to work for them, awesome. I don’t have any advice besides contribute to open source projects as much as possible. Actually, that’s my advice on getting any job in the WordPress ecosystem.

One of the best pieces of advice I got from reading up on Jake was that when you don’t know how to do something, you should read the source code, before you go to the codex. This is one of the most useful bits of advice I’ve ever gotten and I repeat it all the time.

Why You Must Use The Source

Search your feelings, you know it to be true!

Since you do all of your development locally — you do, right? — you probably have like 14 copies of WordPress on your computer. WordPress is super well documented inline and the code is designed to be readable. There is a lot of abstraction that could be done in WordPress core that would come at the expense of readability, and I am happy it has not been done.

I’m not here to knock the Codex or the code reference. They are both great resources, and I use them.

What you can only get from reading the source is an understanding of how something works. The documentation is a reflection of what it does, or should do. The source, along with a quality inline debugger, tells you what is actually happening — you are using Xdebug, right? And along the way, you will learn a ton about development patterns by reading the source.

It also means that if your usage of the function, method, hook, etc. doesn’t work as intended, you are in exactly the right spot to start asking why. You can see what should be happening and then start debugging to see what is happening.

Most importantly, documentation doesn’t always exist. I wrote an article recently for Torque on adding custom endpoints and routes to the WordPress REST API. At the time it was published, I’m pretty sure my article was the only documentation on how to do it.

The lack of documentation wasn’t a problem for me. I just read the source for the default posts controller. You can’t allow yourself to go through life being rendered powerless by a lack of documentation.

It’s Free Software, Y’all

freedom[1] = The freedom to study how the program works, and change it so it does your computing as you wish.

If you want to be a creative non-fiction writer, you’re going to need to not just read, but critically study Truman Capote, Joan Didion, James Baldwin, Hunter S. Thompson, and other top non-fiction writers. That’s a huge part of learning to write in that genre that you can’t skip.

Being a developer is no different. Go grab a well-written plugin and figure out how it works. Along the way you will learn more about how to work with that plugin than you will by reading documentation, no matter how well documented it is.

Pull up the file in WordPress with the API you use the most and trace how it executes the code you’re throwing at it. Next time something goes wrong with that API, you will know exactly where to start your search for the cause of your issue.

It’s free software; don’t forget to use your freedom to learn how the program works, and to document your code to help others learn from what you do.

Composer For WordPress Plugin Development On WPSessions

Today I did a WPSessions session on using Composer in WordPress plugin development. I’m super honored that Brian allowed me to do this, and was really happy to share what I think is a really important tool.

If you missed the live event, you can always purchase the recording from WPSessions.

Below, you can see my slides from the event, which include a lot of helpful links for Composer. You should probably check out the links in the post for my WordCamp Atlanta talk on Composer, which was a more basic talk with a broader scope.

__get() post_meta from a WP_Post Object

Ok, maybe I’m the last one to arrive at the party on this, but I somehow never realized that you can get any post meta field by using its key as a property of a WP_Post object. Pretty useful. It means that if you have a WP_Post object, and the post it represents has a meta key called “josh,” you don’t need to use get_post_meta(); you can use $post->josh.

Here are some examples:

//Add meta fields
update_post_meta( 1, 'hats', 'bats' );
update_post_meta( 1, 'bread', array( 'rye', 'sourdough' ) );

//get the WP_Post object
$post = get_post( 1 );

//use its __get() to get its meta fields.
echo $post->hats; // "bats"
print_r( $post->bread ); //the array
var_dump( $post->not_a_real_key ); //empty string

How does it work? It works thanks to the way the __get() magic method is used in the WP_Post class. If a class has this magic method, it will run any time a property that doesn’t actually exist on the object is accessed. In WP_Post it is used for a few fields that are not in the posts table, and therefore are not retrieved by the main query for the post.
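To see the magic method in isolation, here is a minimal, hypothetical class — nothing WordPress-specific — showing when __get() fires:

```php
class Example {
	private $data = array( 'josh' => 'pollock' );

	// Runs only when an inaccessible or undefined property is read.
	public function __get( $key ) {
		return isset( $this->data[ $key ] ) ? $this->data[ $key ] : '';
	}
}

$example = new Example();
echo $example->josh; // "pollock" -- no real property, so __get() runs.
echo $example->not_real; // "" -- an empty-string fallback for unknown keys.
```

WP_Post does the same thing, except its fallback is a get_post_meta() call, which is why any meta key works as a property.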

As always, taking a look at the source, and then trying it out, is the best way to learn. Here is the __get() from WP_Post, copied from the source for your convenience:

//how it works in core
//Copied from post.php ~L718 in WordPress 4.3 (much GPL)

/**
 * Getter.
 *
 * @param string $key Key to get.
 * @return mixed
 */
public function __get( $key ) {
	if ( 'page_template' == $key && $this->__isset( $key ) ) {
		return get_post_meta( $this->ID, '_wp_page_template', true );
	}

	if ( 'post_category' == $key ) {
		if ( is_object_in_taxonomy( $this->post_type, 'category' ) )
			$terms = get_the_terms( $this, 'category' );

		if ( empty( $terms ) )
			return array();

		return wp_list_pluck( $terms, 'term_id' );
	}

	if ( 'tags_input' == $key ) {
		if ( is_object_in_taxonomy( $this->post_type, 'post_tag' ) )
			$terms = get_the_terms( $this, 'post_tag' );

		if ( empty( $terms ) )
			return array();

		return wp_list_pluck( $terms, 'name' );
	}

	// Rest of the values need filtering.
	if ( 'ancestors' == $key ) {
		$value = get_post_ancestors( $this );
	} else {
		$value = get_post_meta( $this->ID, $key, true );
	}

	if ( $this->filter )
		$value = sanitize_post_field( $key, $value, $this->ID, $this->filter );

	return $value;
}

Why am I thinking about this? In Caldera Forms we have the ability to auto-populate select fields from a specific post type or taxonomy. I’m working on adding an autocomplete/select2 field type that will be able to use this auto-population option.

As a result, I’m looking at how to make them more useful and flexible. My first step is adding filters for which fields of the WP_Post object, or the stdClass object for a taxonomy term, are used for the option’s value and label. If you look at this commit, where I added those fields, you can see how much more flexible it is for posts than for terms.

Why? Because of WP_Post’s magic __get(), you can use this filter to use meta fields, or even the post’s category, as the option label or value. Since get_terms() returns stdClass objects — not a dedicated WordPress object designed with this kind of flexibility in mind — there is no such luck there. Those using these filters will be limited to the regular taxonomy term fields.

Use It In Your Own Classes

Studying how __get() is used in WP_Post shows you a fun way to use a WP_Post object. But it should also give you ideas on how to use it in your own work.

What’s cool about __get() is that it only runs when needed. In WP_Post, the basic post fields are queried for every time, but the extra fields available through the __get() method — which are not limited to meta fields — are only queried for when asked for specifically. It’s smart class architecture that we can all learn from when writing classes for specific queries or API calls where, in some but not all cases, you might need additional data.
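As a rough sketch of that pattern — the class name and the expensive lookup here are hypothetical — a class can defer an extra query until the moment a property is actually accessed, then cache the result:

```php
// Hypothetical example of lazy loading extra data with __get().
class Lazy_Report {

	private $extra = array();

	public function __get( $key ) {
		// Only run the expensive lookup the first time the
		// property is accessed; cache the result after that.
		if ( ! isset( $this->extra[ $key ] ) ) {
			$this->extra[ $key ] = $this->expensive_lookup( $key );
		}

		return $this->extra[ $key ];
	}

	private function expensive_lookup( $key ) {
		// In real code this might be a meta query or a remote API call.
		return 'value for ' . $key;
	}
}

$report = new Lazy_Report();
echo $report->sales; // The lookup for "sales" happens only now.
```

Objects that never touch the extra properties never pay for the extra queries, which is exactly the benefit WP_Post gets from this design.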

Introduction To AJAX In WordPress

This weekend I am giving an introduction to AJAX in WordPress talk at WordCamp Miami. AJAX is one of the most important tools available to us as WordPress developers. It allows us to create more dynamic and usable sites. The fewer page loads, and the more interactive a site is, the better the end-user experience will be.

Understanding AJAX and the WordPress REST API is a key step to building apps with WordPress. My talk covers the basic patterns for using jQuery AJAX in WordPress, as well as the technology involved.

Slides & Example Code

Example code can be downloaded at:


Next Steps

This talk covers only the most basic patterns and important concepts for using AJAX in WordPress. To take what you’ve learned further, I recommend the following links:

Fun With array_column()

  • array_column() is only available in PHP 5.5 or later. 😛
  • Figured this out for the example code demonstrating some cool new features in Caldera Forms 1.2
  • This is a little extra-complex due to my need to find the array key, as the function returns the (numeric) index.
  • I need a syntax highlighter, I know.
$a = array(
	'x' => array(
		'a' => 7,
		'b' => 8,
	),
	'y' => array(
		'a' => 2,
		'b' => 9,
	),
	'z' => array(
		'a' => 55,
		'b' => 'hats',
	),
);

//demonstrate how array_column works
var_dump( array_column( $a, 'b' ) ); //returns an array of all 'b' values in array $a

//get the (numeric) index of the array in $a with a value of 9 for index 'b'
$index = array_search( 9, array_column( $a, 'b' ) ); //returns 1, the (numeric) index of the array we want

//get the keys of the outer array
$indexes = array_keys( $a );

//get the key for the index we want
$key = $indexes[ $index ]; //returns 'y'

var_dump( $key );
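The key lookup can also be collapsed into one step by re-keying the column with the original keys before searching. A self-contained sketch, using the same shape of array as above:

```php
$a = array(
	'x' => array( 'a' => 7,  'b' => 8 ),
	'y' => array( 'a' => 2,  'b' => 9 ),
	'z' => array( 'a' => 55, 'b' => 'hats' ),
);

// Re-key the 'b' column with the outer array's keys, then search that.
$key = array_search( 9, array_combine( array_keys( $a ), array_column( $a, 'b' ) ) );
var_dump( $key ); // string(1) "y"
```

This works because array_combine() pairs the outer keys ('x', 'y', 'z') with the 'b' column values (8, 9, 'hats'), so array_search() returns the original string key directly.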

Gain WordPress Development Superpowers With Composer

This weekend I am super-honored to be presenting at WordCamp Atlanta 2015 on Composer, the PHP dependency manager. This presentation, “Using Composer To Increase Your WordPress Development Powers,” is adapted from an earlier post on this site of the same title.

If you are new to Composer, I recommend that you read my Torque posts on it. So far I have written an introduction to using Composer with WordPress and a guide to improving WordPress plugin development with Composer. I also recommend reading through Rarst’s Composer for WordPress resources.

The video from the talk is now available from WordPress TV:

My slides for the presentation are below:

Changing File Names In A Git Repository Without Losing File History

Git tracks files by where they are in the directory structure. This creates unintended commits and losses of file history when renaming directories. There are two good solutions I’ve found for these issues, depending on the circumstances.

Change A Directory’s Name

Turns out Git has its own mv command just like a unix-like OS does. So, to change the name of a directory from frog to barista, it is as simple as:

git mv frog barista

Source, and some excellent alternative solutions here:

Rewrite Full Directory History For All Files

This method is really drastic, as it rewrites the history of the repo. It is as if the files were always there, and the move never happened. I recently did this when I wanted to use one repository as the starting point for another, but needed to move everything in the old repository down one level into a subdirectory.

In this example, all files are moved into a directory called “root”. You can use slashes to move down more levels.

git filter-branch --prune-empty --tree-filter '
if [[ ! -e root ]]; then
    mkdir -p root
    git ls-tree --name-only $GIT_COMMIT | xargs -I files mv files root
fi' HEAD


Using Dropbox To Keep VVV In Sync on Multiple Computers

Updated November 3, 2014: See below for a few issues we’ve come across.

I’ve been plotting for a while now to get a kick-ass desktop for development. Since I work once or twice a week at a co-working space and travel for WordCamps or to visit family fairly regularly, I’m going to need to keep my laptop for those situations. One of the things, besides clients owing me money, that has kept me from getting said kick-ass desktop machine is worrying about how to keep my development environment in sync between the two machines.

Lucky for me, Scott Kingsley Clark figured out how easy it is to keep VVV in sync between his shiny new iMac and his laptop. Scott was kind enough to share his strategy with me, which I have tested with a loaner computer and found to work very well.

Before We Get Started

I’m assuming that you’re already using VVV and are familiar with it. If you’re not, you should probably be reading my guide to getting started with VVV for local WordPress development instead.

Since you’re familiar with VVV, it’s a safe assumption that you know to install Vagrant itself and VirtualBox or some other VM software on both computers. Right?

You can either use an existing VVV setup or create a new one for this. In this guide, I will be starting from scratch, but you could also do the same thing by temporarily moving your existing one into your Dropbox folder instead of creating a new one.

Also, for this guide, I will be calling the vagrant folder dvv, for Dropbox Varying Vagrants. You can call it whatever you want.

Setting It Up

Install VVV In Dropbox

The first step is to clone VVV itself into Dropbox:
cd ~/dropbox
git clone dvv
You could also download the ZIP and extract it in Dropbox.

Symlink VVV Folder

On both computers you are going to want to symlink the VVV folder with a folder in your user root. This is mildly optional, as you could just work out of Dropbox. Personally, I agree with Scott on symlinking, as I love the ease of being able to cd directly into my VVV from a new bash shell. It’s a little thing, but I have to do it after every restart.

If you’re using an existing install, this step is extra important, as it lets you put the install back where you found it on the originating computer.

You must do the symlink on both computers:
ln -s ~/dropbox/dvv ~/dvv

Vagrant Up

On the other machine, do a new vagrant provision, and that’s it: you’re good to go.

What This Doesn’t Do

This does not keep the virtual machines themselves in sync. I don’t think it makes any sense to do so, though I’m sure it’s possible. That means whenever you make changes to your configuration or add a new site, you will need to do a new vagrant up on the other computer.

Also, since the database is inside the virtual machine, it is not kept in sync. If you have the Vagrant Triggers plugin installed, you get a database backup every time you run vagrant halt. You can use that to rebuild the DB when doing a new vagrant up or vagrant provision.

That’s Actually Very Simple

That’s it. Turns out this is very simple. Scott’s pretty good at creating ways of making WordPress simpler.

OK, Maybe Not That Simple

Here are some caveats that Scott has discovered since he started using Dropbox to sync his VVV between two computers:

1. My virtual machine tends to be recreated from time to time, forcing a 100% install over again when doing a `vagrant up`. This is likely because some files synced by Dropbox are unique to the computer, and keep changing between ‘up’s on the different Vagrants.

2. Because of the virtual machine recreation, DB changes can disappear, and most commonly be restored by Vagrant during its provisioning. It’s important to note that when you use `vagrant halt`, it will back up the databases on the current virtual machine, and on `vagrant up` (first, or `vagrant provision`) it will attempt to restore those DB .sql files on the other machine.

Approach with caution. I now believe the best way to sync VVV is to limit the sync to the `www` folder, not the entire VVV folder contents.