
Blog

Posted by Megan Erin Miller on Tuesday, March 17, 2015 - 8:00am

We are excited to announce the upcoming March release of our updated mobile-responsive Drupal themes:

  • Stanford Framework 7.x-3.0 (major version release) *

  • Open Framework 7.x-2.3

* Stanford-branded themes are available by request for official university group and department websites.

How to receive updates

For websites hosted on Stanford Sites: updated themes will be rolled out over March 20-22, 2015. If your website is on Stanford Sites, and is using an earlier version of one of these themes, the updated theme will be enabled automatically.

For websites hosted elsewhere: themes will be available for download March 20, 2015. To download the latest theme versions, visit http://drupalthemes.stanford.edu.

To see demos and request use of a Stanford-branded theme, visit http://drupalthemes.stanford.edu.

Changelog

Below are improvements made to the themes since the last release (September 2014).
 

Stanford Framework 7.x-3.0 (major version release)

New functionality

The newest version of Stanford Framework includes several new theme options that greatly expand functionality of the theme:

  • The ability to choose from seven different color palettes and three font theme options (all fitting the Stanford identity guidelines but allowing for easier customization of the look-and-feel)
  • The ability to select Wilbur and Jordan as theme options (instead of requiring an additional subtheme)
  • A theme option to add a responsive header background image, with sub-options to make the image fill the entire page background on the homepage for greater impact and to choose a light or dark text color that contrasts with your background image

Accessibility improvements

  • Fixed empty button element in "hamburger" menu
  • Fixed accessibility and color contrast issues with custom theme option settings combinations

Bug fixing

  • Fixed a myriad of bugs related to layout, theme options, and page templates

Style/Cross-browser improvements

  • Fixed IE8 issues
  • Tweaked table, block, pager, image, sidebar, menu, dropdowns, and calendar styles
  • Updated style guide examples in style-guide-examples.html

Theme Settings page improvements

  • Re-organized theme settings page, collapsing fieldsets and adding conditional fields based on style options
  • Added thumbnail images for previewing the custom theme option styles

Code improvements

  • Added allowance for overriding theme settings by refactoring template code
  • Refactored SASS files for managing theme option styles

 

Open Framework 7.x-2.3

New functionality

  • Added theme option to select Font Awesome 4.3.0 or 3.2.1

Accessibility improvements

  • Fixed empty button element in "hamburger" menu

Bug fixing

  • Added missing body classes for body background image theme option

Style/Cross-browser improvements

  • Moved body related theme options styles from html.tpl.php to template.php

 

Have a question or concern?

Stanford Web Services creates and centrally maintains Stanford’s Drupal themes. If you have any questions, please file a HelpSU request at http://helpsu.stanford.edu/helpsu.cgi?pcat=webdesign, and we will respond as soon as possible.

We're excited to bring these new theme developments and improvements to the community. Thanks to everyone who has submitted ideas and bugs over the past six months. We appreciate your support!

 

Posted by Cynthia Mijares on Monday, March 9, 2015 - 11:00am

What happens when the one guy who has access to manage users on your Drupal website leaves? Do you have a team of people that needs to work on the website, but those people change over time? Do they all have a SUNet ID? Take advantage of the Stanford WebAuth Module features and manage access by mapping Stanford workgroups to Drupal roles. 

Learn more about Stanford Workgroups

Set up a Workgroup Role Mapping

  1. Verify the WebAuth module is enabled.
  2. From the admin menu bar, navigate to Configuration > WebAuth > Role Mappings.
  3. Select the Drupal role, enter the workgroup name, then click the Add Mapping button.


Drupal roles are automatically assigned to people who log in with WebAuth and belong to a mapped workgroup.

Posted by Shea Ross McKinney on Thursday, February 26, 2015 - 11:00am

Drupal comes with a built-in search module that provides some pretty basic search options. As with everything else we do, we asked ourselves, "How can this be better?" One of our recent projects gave us the opportunity to evaluate some new options for search.

Note: This evaluation was driven by a specific set of needs and wants; my notes on them are below.

Wish list, in order of importance

  1. Facets
  2. Grouping of results by content type
  3. Autocomplete or recommendations
  4. Spellcheck 
  5. Biasing or the ability to order the results outside the natural result set
  6. Search analytics
  7. Ease of wrapping up all the configuration into an installable module or profile

What is out there? What's Hot?

The first thing to decide on is the engine that will power the new search, and it turns out there are a few options. Through my research I found that there are some big differences and specific use cases for each of the available search options below. I also found this resource, which does a good job of comparing and illustrating the search options available for Drupal. As a developer, it is important to look at the industry and see what is happening in the space you are working in, looking beyond the download counter on the module page and which project has the most active commits. If my selection were based purely on those two credentials, Search API DB and Solr would be the clear winners. These two, although good options for some, might not hit the needs of your project. What I found was that one particular engine was stirring up some waves.

Contrib search options for Drupal:

  • Acquia Solr
  • Other Solr
  • Search API with search db 
  • Sphinx
  • Elasticsearch
  • Fuzzy Search 
  • Xapian
  • Sarnia
  • Google Custom Search
  • Custom written extensions of Drupal core search
  • Fake it with views and exposed filters

What are these things?

Search API is a collection of modules that allow site builders to build out advanced full text search solutions for Drupal. Right from the project page itself:

"This module provides a framework for easily creating searches on any entity known to Drupal, using any kind of search engine. For site administrators, it is a great alternative to other search solutions, since it already incorporates faceting support and the ability to use the Views module for displaying search results, filters, etc. Also, with the Apache Solr integration, a high-performance search engine is available for this module.

Developers, on the other hand, will be impressed by the large flexibility and numerous ways of extension the module provides. Hence, the growing number of additional contrib modules, providing additional functionality or helping users customize some aspects of the search process." (Drupal.org project page)

Solr is a super-fast, open source, standalone search engine built on Lucene with Java. Solr needs to be installed and run as its own service. For a full list of features and information, check out Solr's features page. It is one of the most widely used and developed-for search engines in the Drupal space; Drupal.org uses it to power its search as well as its project browsing pages. The Drupal community has been very active around this search solution, providing several implementation methods as well as a number of contributed modules to make it better. Acquia has provided support and development for a number of modules that use Solr, and also offers a hosted Solr solution. There are a number of hosted solutions out there if you cannot install, or do not want to support, your own Solr instance; for example, if you are on a shared hosting platform, you will probably want to go with a hosted solution.

Search API Database Search is a PHP/database solution. "It is therefore a cheap and simple alternative to backends like Solr, but can also be a great option for larger sites if you know what you're doing." (Drupal.org project page) This module is a backend for the Search API module. It provides a much stronger search than the out-of-the-box Drupal core search offering and can be used on any Drupal website and hosting environment. Because of its underlying technologies, however, it can perform poorly on large, highly trafficked websites, where it could potentially eat up your server's resources and slow down the site.

Sphinx is for enterprise or massive-scale websites. This search engine powers Craigslist, which claims over 300 million search queries a day.

"Sphinx is an open source full text search server, designed from the ground up with performance, relevance (aka search quality), and integration simplicity in mind. It's written in C++ and works on Linux (RedHat, Ubuntu, etc), Windows, MacOS, Solaris, FreeBSD, and a few other systems." (sphinxsearch.com)

Elasticsearch looks to be the new hotness. This is the product that is making waves in the search community – so big that a Solr hosting service has taken the time to address it. It is fast, feature rich, has a sexy UI, and comes with a number of extra tools that provide valuable information and functionality. Like Solr, Elasticsearch is a standalone search engine that needs to be installed on its own and connected to Drupal's modules. There are also a number of hosted solutions available. This search engine was built from the ground up with the cloud in mind.

"Elasticsearch is a flexible and powerful open source, distributed, real-time search and analytics engine. Architected from the ground up for use in distributed environments where reliability and scalability are must haves. Elasticsearch gives you the ability to move easily beyond simple full-text search." (elasticsearch.org)

The extra tools that come with Elasticsearch are:

  1. Logstash, a time based event data logger.
  2. Kibana, a data visualization tool that gives you dashboards of information in real time about the data being indexed.
  3. Marvel, a deployment and cluster management tool that provides historical and real time information about your Elasticsearch servers.

Fuzzy Search is similar to Search API Database Search in that it is also a PHP/database solution. It can be installed onto any Drupal website and integrates with Search API.

Fuzzy matching is implemented using ngrams. Each word in a node is split into three-letter (by default) segments, so 'apple' gets indexed as three smaller strings: 'app', 'ppl', and 'ple'. The effect of this is that as long as your search matches a certain percentage of the word (configurable in the admin settings), the node will be pulled up in the results.

Although a likely candidate to rival Search API Database Search, this project looks to be stale. There is no stable release for Drupal 7, the last commit was in 2013, and the module status is 'seeking new maintainer'. It does have a decent install base of over 2,100 websites, but the lack of development is discouraging.
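The ngram splitting described above can be sketched in a few lines of PHP. This is a simplified illustration of the technique, not the module's actual code, and the function name is made up:

```php
<?php
// Simplified sketch of ngram indexing: slide a 3-letter window
// across the word and collect each substring.
function sws_ngrams($word, $length = 3) {
  $grams = array();
  for ($i = 0; $i <= strlen($word) - $length; $i++) {
    $grams[] = substr($word, $i, $length);
  }
  return $grams;
}

print_r(sws_ngrams('apple')); // 'app', 'ppl', 'ple'
```

At search time the same splitting is applied to the keywords, and a node is returned when enough of its indexed ngrams overlap the query's ngrams.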

Xapian "is an Open Source Search Engine Library, released under the GPL. It's written in C++, with bindings to allow use from Perl, Python, PHP, Java, Tcl, C#, Ruby, Lua, Erlang and Node.js." (http://xapian.org/) Xapian's strength looks to be in document indexing, specifically large documents. Full disclosure: I did not get around to testing it, but here is a link to a video where Simon Lindsay talks about the project. You can see his part at the 11:50 mark.

Xapian is a highly adaptable toolkit which allows developers to easily add advanced indexing and search facilities to their own applications. It supports the Probabilistic Information Retrieval model and also supports a rich set of boolean query operators.
 
Sarnia "allows a Drupal site to interact with and display data from Solr cores with arbitrary schemas, mainly by building views. This is useful for Solr cores that index large, external (ie, non-Drupal) datasets that either aren't practical to store in Drupal or that are already indexed in Solr." (Drupal.org project page)
 
This looks really cool but was outside the scope for my testing. I would love to hear more about it.
 

Google Custom Search is different from all of the above options in that it is an embedded search engine that you get from Google. It uses a crawler and your sitemap.xml data to crawl your website and provide Google-like searching of it. The downside is that it does not provide the control I want over what to index and how to display results. It is, however, a great option for a quick and easy search solution.

Custom coding is always an option if you have the expertise and time available. However, Drupal is open source software with many viable search options, and it would be silly not to use or build upon a project that has already been started.

Faking it with Views exposed filters is a fast, cheap, and "not a real search but sometimes good enough" (Shea McKinney) solution. If you are looking for exact keyword matching or simple filtering, this may be a less resource-intensive option, but Views exposed filters should not be viewed as a complete search option.

Quick elimination

Now that I know what the playing field is, it is time to make the first round of cuts. Here are some quick notes on why I chose to remove a few options from the list.

  • There are many contrib modules that provide the added functionality we want and it would be far more effort to write our own. Going fully custom won't be needed here.
  • Views with exposed filters would allow a lot of control over display of results but are field based and cause problems quickly when there are multiple content types in play.
  • Google custom search or other 3rd party crawlers do not provide enough control over display, don't support facets, and can only index publicly available content.
  • Xapian looks promising but is not as feature rich as other options and requires PHP libraries to be installed on the server.
  • Sarnia looks interesting and is built on Search API and Solr but is best used for large amounts of external Solr data and is probably more than we need.
  • Sphinx is very fast because it uses real-time indexes. Its best use case would be a site that has hundreds to thousands of new entities created an hour and needs content instantly searchable. Our typical use case would not have this volume of new content on a regular basis.
  • Local Solr setup is not an option as we do not have the resources to set up and maintain a Solr search server in our environment.
  • The Fuzzy search project is looking for a new maintainer and although this project could be a good opportunity to pick up and help a contrib module there are other more interesting projects out there.

A closer look

After reviewing all of the options above, it looks like there are a few really good choices, so it was time to put them through their paces. To test, I installed our base distribution, which comes with a number of contributed modules and a few content types, and configured from scratch: one index for all content types, inclusion of six different field types, author and taxonomy relationships, search field biasing, facets, and autocomplete. From there I generated roughly 200 taxonomy terms and attached them to roughly 7,000 nodes of varying content types. I ran indexing immediately and selected a few nodes as my target searches. I searched for those nodes on multiple fields using multiple keywords and compared the results to my arbitrary values of relevance.
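For anyone reproducing a test like this, the Devel module's devel_generate sub-module can create the dummy terms and nodes from the command line. This is an illustrative sketch; the vocabulary name and counts are assumptions, not the exact commands used here:

```shell
# Enable the Devel generate sub-module, then create dummy data.
drush en -y devel devel_generate
drush generate-terms tags 200     # ~200 terms in the 'tags' vocabulary
drush generate-content 7000      # ~7000 nodes across content types
```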

Below is a feature breakdown for each option.

Search API

Cost: Free, as in beer.

Features:

  • Autocomplete (contrib)
  • Search live results (contrib)
  • Saved searches (contrib)
  • Range searches (contrib)
  • Search sorting (contrib)
  • Location searching (contrib)
  • Search Pages or Search Views (contrib)
  • Search statistics (contrib)
  • Multiple search indexes
  • Integrates with Views
  • Index entities immediately or on cron
  • Multiple index searching

Cons:

  • Add-on modules to the Search API module have the feel of being buggy. This is a shame, since the Search API module itself is exceptionally well maintained.
  • The spellcheck project is four years old with no commits.

Search API + Search API Database Search

Cost: Free, as in beer.

Features:

  • All of the Search API features, and...
  • Install anywhere you have Drupal installed
  • Result biasing
  • Search facets (contrib)
  • Portable / migratable

Cons:

  • Less accurate and powerful than Solr or Elasticsearch
  • Needs Apache Tika installed to index files

Search API + externally hosted Solr

Cost: Cheapest, $10.00/month; most expensive, thousands of dollars a month.

Features:

  • All of the Search API features, and...
  • Fast searching
  • Result biasing
  • Search facets (contrib)
  • Portable / migratable
  • Can index files

Cons:

  • Can take some time to index many and/or large documents

Search API + externally hosted Elasticsearch

Cost: Anywhere from $37.00/month to thousands of dollars a month.

Features:

  • All of the Search API features, and...
  • Fast searching and indexing
  • Search facets (contrib)
  • Multiple transports (cURL, Guzzle, Thrift, Memcached)
  • Bonus software and monitoring tools
  • Can index files

Cons:

  • During testing I periodically dropped, created, and copied settings around to various test URLs and environments, and had some issues with the database index machine names.

Search API autocomplete vs Search API live results

Search API Autocomplete provides an autocomplete search field that displays the keyword or matching keywords the user is typing, plus the number of results each keyword would return, in a drop-down box off the search field. The Live Results module does roughly the same, except that it displays search results rather than keywords in the drop-down box; clicking or selecting a result from the drop-down takes the user directly to that result's page.

Winner for use on a search page: Search API Autocomplete

Decision

The status of search in Drupal is good. There are a number of powerful and easy-to-implement options out there, and Search API is leading the way: it empowers a site to move past Drupal core search and utilize a full-text search option. For us and our clients' needs, we will be looking to build out a graduated option. For the most common use case, a search implementation with Search API + Search API Database Search will be sufficient. For sites that need something more robust, a migration path to Search API + Solr will be used.

Why did we choose Solr over Elasticsearch? 

It was the slimmest of margins that allowed Solr to top Elasticsearch. Looking specifically at our needs, this breakdown discusses the points we valued as most important.

Functionality

Our needs are simple. We want our search engine to provide accurate results, facets, excerpt snippets, and possibly the ability to index raw files. Both Solr and Elasticsearch performed these operations very well.

Performance

Testing on several remote services and standing up local instances of each search appliance, at our scale with one index, search performance was very similar. Elasticsearch out-performed Solr on indexing, however, due to indexing waits on the Solr hosts, where Elasticsearch was effectively instantaneous.

Ease of setup and use

As we decided to go with 3rd-party hosting options, the setup and connection for each option was very similar. Both options have easy-to-follow configuration options in their respective modules.

3rd party options

There are several options for 3rd-party Solr hosts, including Acquia. Generally, Solr hosting offers the cheaper entry points, but both scale up to the thousands-of-dollars-a-month range.

Project activity

Strictly looking at momentum in the Drupal community, Elasticsearch has come a long way recently, but Solr stands out as the most developed-for option and comes with a number of contributed modules and features. Solr looks to be the more mature project, while Elasticsearch is making great headway. With some great features in development, keep your eyes on Elasticsearch and its progress.

Support

With a number of groups on campus already using Solr as their search engine of choice, it makes sense for us to use Solr as well. Not only will we be in line with the rest of campus, but we will also have resources available should we need some extra support.


Posted by John Bickar on Friday, February 13, 2015 - 2:00pm

If you're using Behat and the Drupal Extension, you might find the following code snippet helpful if you want to add a step to wait for batch jobs to finish.

If one of your Behat scenarios kicks off a batch job (e.g., a Feeds import), and you want to wait for that batch job to finish before moving on to the next step, add this step definition in your FeatureContext.php file:

  /**
   * Wait until the id="updateprogress" element is gone,
   * or timeout after 3 minutes (180,000 ms).
   *
   * @Given /^I wait for the batch job to finish$/
   */
  public function iWaitForTheBatchJobToFinish() {
    $this->getSession()->wait(180000, 'jQuery("#updateprogress").length === 0');
  }

Then, in your featurename.feature file, you can call this step like so:

When I press the "Save" button
And I wait for the batch job to finish
Then I should see "created"

This will cause the web driver to wait until the batch job is finished (or, more accurately, to wait until there is no longer an id="updateprogress" element on the page), or else time out after 3 minutes (180000 ms). You can adjust the timeout to whatever you want by changing that 180000 number. You will have to use the @javascript context in your feature to use this step definition.
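Putting that together, the scenario in your featurename.feature file would carry the @javascript tag; the scenario name here is illustrative:

```gherkin
@javascript
Scenario: Wait for a batch job before asserting results
  When I press the "Save" button
  And I wait for the batch job to finish
  Then I should see "created"
```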

(Note that a request to add this step definition to the Behat Drupal Extension has been submitted.)


Posted by Joe Knox on Monday, February 2, 2015 - 9:00am

Google Analytics gives you critical insights that help drive innovation and evolution by showing you exactly how people are using your site. Understanding how your site is being used can aid in identifying where improvements can and should be made. This post will cover how to add Google Analytics to your Drupal site.

Once your Google Analytics account is set up, there are three main steps that need to be completed in order to begin tracking your site: 1. Creating a new property for your site, 2. Enabling the Google Analytics module, and 3. Configuring the Google Analytics module.

Create a new property for your site

  1. Log in to your Google Analytics account.
  2. Click the Admin tab.
  3. In the Account column, use the dropdown menu to select the account to which you want to add the property.
  4. In the Property column, click Create new property from the dropdown menu.


  5. Select Website.
  6. Enter your Website Name.
  7. Enter your Website URL.
  8. Select an Industry Category and Reporting Time Zone.
  9. Click Get Tracking ID (copy/note the tracking ID for later step).

Enable the Google Analytics module

  1. Log in to the site you wish to track as an administrator.
  2. Navigate to Admin -> Modules.
  3. Enable the Google Analytics module.


  4. Click Save configuration.

Configure the Google Analytics module

  1. Navigate to Admin -> Configuration -> System -> Google Analytics.
  2. Enter the Web Property ID (UA-xxxxxxx-yy) from your Google Analytics account into the Web Property ID field.


  3. Click Roles.
  4. Check Add to every role except the selected ones, then select the roles that you don’t want to be tracked (e.g., administrator).


  5. Click Users.
  6. Check Tracking on by default, users with opt-in or out of tracking permission can opt out.


  7. Adjust other settings as necessary.
  8. Click Save configuration.

Your Drupal site is now being tracked by Google Analytics. You can generate detailed statistics about your site's traffic, measure visitor behavior, monitor browser usage and other important things from your Google Analytics account.

But wait, there's more! Adding views and filters

It's possible that other sites may copy your header markup and paste it into their site verbatim as a template. This could include your site's Google Analytics code and property ID. When this happens, data can be sent to your property in Google Analytics from sites that are not yours and are unrelated to your content.

One way to solve this is by creating a View in Google Analytics and adding a Filter to it that only includes data from the hostname that you want to monitor. This is something to consider doing for all new properties after set up, as Views are not retroactive and only affect new data being sent to Google Analytics.

Here's how to do it:

  1. Log in to your Google Analytics account.
  2. Select the Admin tab.
  3. Navigate to the Account and Property to which you want to add the view and filter.
  4. In the View column, click Create new view from the dropdown menu.


  5. Select Website.
  6. Enter the Reporting View Name (e.g., mysite.stanford.edu only).
  7. Select a Reporting Time Zone.
  8. Click Create View.
  9. From the View column, click Filters.


  10. Click + New Filter.
  11. Select Create new Filter.
  12. Enter a name for the filter in the Filter Name field.
  13. Select Custom Filter.
  14. Select Include.
  15. Select Hostname from the Filter Field dropdown.
  16. Enter your site URL in the Filter Pattern field (e.g., mysite\.stanford\.edu).


  17. Click Save.
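One gotcha with step 16: the Filter Pattern is a regular expression, so literal dots in the hostname must be escaped. A quick sanity check of a candidate pattern (the second hostname is a made-up example of an unrelated site):

```php
<?php
// The Google Analytics filter pattern is a regular expression,
// so literal dots in the hostname must be escaped.
$pattern = '/mysite\.stanford\.edu/';

var_dump((bool) preg_match($pattern, 'mysite.stanford.edu')); // true: our hostname
var_dump((bool) preg_match($pattern, 'copycat.example.com')); // false: filtered out
```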

The more you know

I hope these steps were helpful! Google Analytics can be a great tool for managing your site and planning improvements based on what your viewers need. Do you have other Google Analytics tricks to share? Share more in the comments below!

Posted by John Bickar on Tuesday, January 20, 2015 - 9:00am

I gave a lightning talk at the Stanford University IT Unconference on October 30th, 2014, titled, "The (Wo)Man and the Machine: Automated Testing, User Stories, and Code Refactoring".

In five minutes, I outline how (and more importantly why) we use Behat to run automated tests to improve user experience and support large-scale code deployment and refactoring.

Video is below.

Slides are here (PDF).

Transcript is here.

Thanks to the Office of Accessible Education for providing the tools for transcribing and captioning this video.

Posted by Megan Erin Miller on Friday, January 16, 2015 - 9:00am

Welcome back! It's a new year, and that means you probably have some New Year's resolutions on a sticky note near your computer (I have about ten). If one of the goals on your list is to learn Photoshop or how to code, then as a seasoned overachiever – I mean, self-driven, lifelong learner – I've got some tips for you on how to rock your resolution.

1. Pick a resource

There are some great online resources out there for learning technical skills like software or programming – for example, Lynda.com (which Stanford now provides for free to faculty, staff, and students!). These kinds of sites, which put learning resources on demand at our fingertips, are only as helpful as you make them. It is too easy to drown in the multitude of online resources and get lost in the learning rabbit hole. What I recommend is to choose one resource – a book, an online course, or a site like Lynda (which has tracks of curriculum) – and stick to that resource to create more structure for yourself. Let it guide you in your learning, and give it a real chance before ditching it.

2. Make it real

The key to making self-driven learning work for you is to have a real project or goal in mind that requires you to learn those new skills. Do enough research ahead of time to come up with a project idea that interests you and can provide focus in your learning. Watching videos or reading tutorials is ineffective if you do not directly apply them to a project, so don't waste your time: make your goal fun and something you are passionate about so that you will keep it top of mind.

That said, keep your project goal small enough that it isn't scary. Don't dream up the next Facebook; come up with an idea like a one-page website Valentine's card for your significant other. As you learn your new skill, you will have more ideas about how to expand your project goal and make it more interesting, so start off with a simple idea and let it grow.

3. Create accountability

There's nothing like creating accountability when it comes to reaching your self-driven learning goal. This is actually what real school does, but you can create this accountability for yourself! Here are some ideas:

  • Find a buddy: An accountability partner can be someone else committed to a goal they have. Set a regular day each week that you email each other an update on what you did that week, and what you plan to do the next week. It's a way to build your friendship and become partners in reaching your goals.
  • Blog your adventure: Create a learning adventure blog where you publish each week a post of something you made that week, and what you learned. Public accountability and a nice record of what you did to show your progress.
  • Put it in the Calendar: That's right. Pretend it's a real class, and put it on your calendar and PROTECT YOUR TIME! Make sure your colleagues and family/friends understand that this time belongs to your learning goals.
  • Report back: If your skill-building is related to work, even if you are doing it as a side project, create public accountability by telling your team what you are doing, and setting a lunch time with them aside to do a little presentation at the end to share what you learned. Everybody benefits.

4. Set intentions, not goals

Lastly, be kind to yourself. For self-driven learning, it is often better to set intentions rather than goals. You can't actually know what the end result is going to be, because learning is a process, and you'll discover that along the way. So instead of setting a goal like, "Learn Photoshop," or "Build Fakeblock," set an intention.

"I intend to spend 1 hour each week on learning Photoshop."

"I intend to spend three months learning CSS."

What's great about this is that every week you stay committed to your intention, you get positive reinforcement to continue, instead of at week two saying to yourself, "Man, I still don't know Photoshop! It's going to take forever. I'm not reaching my goal."

Conclusion

I hope these tips have helped you make your self-driven learning goal a little more achievable. What are some of your New Year's Resolutions, and what has worked for you in the past when setting self-directed learning goals?

Posted by Megan Erin Miller on Friday, December 19, 2014 - 8:15am

2014 has been a blast. We are so grateful to get to work with so many talented and dedicated colleagues around the university (and beyond!). We wish you happy holidays over the winter break, and we'll see you in 2015!

SWS team photo

Posted by Zach Chandler on Friday, December 12, 2014 - 6:55am

All forward-thinking technologies share one attribute: the original designers intentionally build in opportunities for future users to innovate. It requires humility and a belief in the creativity of others. This is true for buildings, computers, networks, and other tools.

As we design systems in Drupal, we should try to imagine how its flexibility can reward future sitebuilders, and allow for innovations that the principal designers cannot imagine themselves. This post is part of my answer to the perennial Why Drupal? question, but it’s also deeper than that, and gets at an inherent tension in web design that I’d like to examine carefully as SWS grows its product line. Drupal is a flexible tool-building tool. Jumpstart is a purpose-built product. How do these attributes co-exist?

Room to Tinker

In October of 1990, having recently convinced CERN that a world wide web would be a good idea, Tim Berners-Lee sat down to write his first web client.  He had working software about a month later (!) due in no small part to the affordances of the computer he was using — a NeXT machine. As it turned out, the fact that Sir Tim was using a NeXT mattered. A lot.

“I still had to find a way to turn text into hypertext, though. This required being able to distinguish text that was a link from text that wasn’t. I delved into the files that defined the internal workings of the text editor, and happily found a spare thirty-two-bit piece of memory, which the developers of NeXT had graciously left open for tinkerers like me. I was able to use the spare space as a pointer from each span of text to the address for any hypertext link. With this, hypertext was easy.” [1]

In 1990, the emergence of the Web was not a foregone conclusion. In fact, this was a KM side project, a curiosity on which the CERN leadership gambled, and they gave Tim a couple of FTEs for six months. That’s it. What if he had struggled with development because he had inflexible tools, and had little to show when the six months were up? Would we even have the Web that we know today?

Learning from physical spaces

“Leave room to evolve” is an imperative that calls out from the landmark book Make Space, by Scott Doorley and Scott Witthoft.

“Allow the space and the people to continue to adapt and grow. Do less. Leave some aspects of the space open-ended, even though your impulse might be to take care of every detail. … Open space provides a buffer for identifying, absorbing, and responding to unanticipated needs.” [2]

Of course, most of the digital spaces that we build do not start with the same brand of high-flying creativity that bursts from the seams of the d.school (when was the last time your web project called for a 9-foot grain thresher?). Nevertheless, there is a lot that we can learn from the ubercreative plaid-wearing set from Building 550. Sure, we iterate. We’re all agile and whatnot. Our projects have version numbers and roadmaps, and hell, we even listen to real people (gasp!) and think about what they need before we make stuff for them. But what about identifying, absorbing, and responding to unanticipated needs? Evolving needs? Do we, as web designers, have a line on how people inhabit the digital spaces we build after the work is done? What are the hacks and workarounds that users employ to make our tools do what they actually need (c.f. desire paths), even the things that they didn’t know they needed when we asked them during discovery?

This is exactly the question that Stewart Brand explores in his book, How Buildings Learn: What Happens After They’re Built. Brand shows how people modify the places where they live and work, and how the best buildings learn over time as their inhabitants hack their own solutions to problems. Vernacular architecture is the term for a transmitted popular culture of building, an eminently practical structural dialect that emerges in a given place, based on local needs.[3] What distinguishes it from the high art of professional architects is that it follows the advice of the d.school above:

“Vernacular buildings evolve. As generations of new buildings imitate the best of mature buildings, they increase in sophistication while retaining simplicity.”

Can we build an Adaptive Architecture for the Web?

(Here I mean “adaptive” in Stewart Brand’s sense of creating room to tinker, not responsive web design.)

The good news is that Drupal is awesome at adaptation and scaffolding future growth. Stanford University is a mind-bendingly expansive enterprise, and we have chosen a tool that can expand as our needs on the web evolve.

For this reason Drupal is chosen over and over again by large, evolving organizations. It’s a domain modeling framework and a toolkit that can start simple, but expand to meet complexity, which is why it’s the tool of choice for many universities, the U.S. government, municipalities, non-profits, and large corporations alike.

Stanford Sites Jumpstart was conceived to harness the power of Drupal, but make it dead simple, even pleasant, to use. We pruned off the rough edges from the Drupal installation process and gave our users a simplified UX that they could get started with from day one — without having to take a Drupal training class first!

This simplification approach has been wildly successful, and our users have validated this design philosophy. But have we let the pendulum swing too far toward simplicity? What if the needs of our users are complex? Let’s not lose sight of how we got here, how the great visionaries on whose shoulders we stand (like Steve Jobs, and Tim Berners-Lee) left us room to tinker, and to innovate. As we design today’s systems, let’s have a similar amount of faith in our future users.

“Let people do things.”
—Stewart Brand

Complications for Products

Though I am advocating for people to retain the freedom to tinker with the tools we create, this can be a dangerous idea if we also expect these tools to behave like products.

How does the imperative to design for future innovation jibe with platforms and products like Stanford Sites, Jumpstart, Open Atrium, Commons, DKAN, Open Aid, Open Outreach, etc., in which it might matter more that things Just Work™?

This is where the parallel with physical buildings breaks down: a builder of homes never hears from a client, “I’m sorry, I accidentally deleted all my walls; can you help me get them back?” If we follow Stewart Brand’s advice and let people do things, they will inevitably break stuff. This is just going to happen, and it is a natural step in the learning process of becoming a site owner and, eventually, a sitebuilder. As designers we have to have some tolerance for the learning-by-breaking-things path, and resist the urge to make breakage impossible, since that choice ultimately means taking away sitebuilding capacities altogether.

Challenge: the only way to enjoy the benefits of a distribution over time is to use it as intended and not modify its fundamental aspects. Patching or modifying it significantly takes you out of the upgrade path and defeats the purpose of the distro as such. So how do we strike a balance?

I don’t really know the answer, but I do think that SWS is onto something with the Jumpstart product suite. It’s a simplified user experience, but it’s still Drupal under the hood. We are consciously working on this problem of “leveling up”, which applies both to websites and to users as they learn and grow with their system. We work iteratively, we test, and we pay attention to the desire paths that our users convey in the way that they adapt what we have built.

Affordances and Desire Paths

A no parking sign, hastily attached to a pole by a pragmatic tinkerer

The image above, depicting a no-parking sign posted on the Stanford campus, is an example of how users will satisfy their needs expediently with the tools that are available (aka “satisficing”). This signage probably isn’t the version that Stanford would officially endorse, but it works, it satisfied a need, and the person who devised this solution spent maybe a couple of dollars in materials (I count 8 zip ties).

Websites powered by CMSes work like this too (at least the good ones do). Drupal is a powerful, flexible toolkit. Even when we package it up as Features and products, it’s still a toolkit underneath, and a natural process in a user’s development, as they learn and grow with the tool, is to solve their own problems expediently with what’s available, using tools they can reach. And our toolkit includes the equivalent of zip ties: super-handy, seemingly all-purpose tools whose affordances allow the user to apply them in multiple (sometimes surprising) ways.

Certain kinds of vernacular architecture were innovation-friendly because they communicated their affordances to their inhabitants. “The lesson for the ages from three-aisled structures is that columns articulate space in a way that makes people feel comfortable making and remaking walls and rooms anchored to the columns.”[4] Is there a digital equivalent? What can we do with UX design to help our users see what’s possible, without letting them get into too much trouble? Can we design the user experience so that the tool teaches its users?

Design is human.

Design is about solving problems for people. As designers of tools we always strive to empathize with the people that use the tools we make, and part of that is making complex things simpler, but we shouldn’t sacrifice our users’ ability to innovate with our tools in ways we can’t imagine. There are things they know about what they need that we can’t anticipate. The NeXT engineers left Tim Berners-Lee the 32 bits of memory he needed to write a hypertext editor, which gave birth to the web as we know it. Now it's our turn.

We should always leave a few zip ties within reach.

 


Notes:

  1. Weaving the Web, pp. 28-29
  2. Make Space, pg. 76
  3. I see a direct parallel to Vernacular Architecture in what we commonly call “best practices” in Drupal development, and how that term can refer to slightly differentiated discrete practices in different locales, institutions, verticals, and communities of practice. Darwinian. We are finches.
  4. How Buildings Learn, Chapter 9
Posted by Joe Knox on Tuesday, December 9, 2014 - 8:50am

In this post, I'll share one of my take-aways from reading A Book Apart's Designing for Emotion, highlighting how infusing personality into the things we create can help produce emotionally engaging experiences that make long-lasting impressions on our audience.

Constant inspiration

One of the coolest things about being a member of the Stanford Web Services team is experiencing the inspiration that erupts with every project (and often, with every discussion). We’re always rethinking processes and studying patterns to find new, more awesome ways of doing things.

Many of our exciting ideas are sparked by something we happened across in our lives that we want to share and further explore. Sometimes inspiration is produced from a blog post, a talk, a short video, a quote, or a book.

Recently, our amazingly rad Web Designer, Megan, recommended (and let me borrow) a fantastic short book loaded with inspiration, Designing for Emotion by Aarron Walter. Designing for Emotion emphasizes the impact we have when we reevaluate what it means to create content, design interfaces, and build websites in the context of emotion.

Websites, with personalities

In many circumstances, experience designers research and interview their audience as part of the design process so that they can create a profile of a standard user who embodies a larger group. This is called a persona. Personas aid web teams in staying focused on user needs and help with understanding who the target audience is. But what about understanding who the website is?

One of the concepts that I found really exciting, and that I’d like to share, is creating a design persona for your website much like you would for a user. In a way, giving personality to your website. Personality can manifest itself through site architecture, content construction, page layout, and design. Creating a persona for your site helps define the best ways to channel personality in each of those areas.

Personality is also a big part of your brand, and the personality a person or thing exhibits deeply sways your audience's decision-making process. In higher education, brand plays an important role in defining the character of an institution or school, and in highlighting the unique characteristics of the departments and offices within the overarching institutional brand. Establishing personality can therefore be a powerful tool in building websites, establishing your brand, and connecting with your audience – is the site trustworthy? Likeable? Easy to get along with? Does it serve its purpose? Do the benefits of forming a lasting relationship with it outweigh the costs?

Ask yourself: if your website was a person, who would it be?

All of this sounds cool, right? But where do you start? Below is a framework of what to include when creating a design persona for your website as outlined in Designing for Emotion.

What to include in your website persona

Brand Name: The name of your site, service, or company.

Overview: A short overview of the personality of your service. What makes your service or brand personality different?

Personality Image: An image of a person that embodies many of the traits you wish to include in your service or brand. This will help make the personality less abstract. Pick a famous person, or a person with whom your team is familiar. If your brand has a mascot or representative that already embodies the personality, use that instead.

Brand Traits: List five to seven traits that best describe your service along with a trait that you want to avoid. This will help those who are designing and writing for this design persona to create a consistent personality while avoiding the traits that would take your service or brand in the wrong direction.

Personality Map: Personalities can be mapped on an X and Y-axis. The X-axis indicates the degree to which the personality is unfriendly or friendly. The Y-axis shows the degree of submissiveness or dominance.

Voice: If your service or brand could talk, how would it speak? What sorts of things would it say? Would it speak with a folksy vernacular or in a refined, erudite clip? Describe the specific aspects of your brand's voice, and how it might change in various communication situations. People change their language and tone to fit the situation, and so should the voice of your service or brand.

Copy Examples: Provide examples of the type of copy that might be used in different situations in your interface. This will help writers quickly get a sense for how your design persona should communicate.

Visual Lexicon (optional): If you are a designer creating this document for yourself and/or a design team, you can include in your design persona a visual lexicon: an overview of the colors, typography, and visual style that will best convey the personality of your service or brand.

Engagement Methods: Describe the types of emotional engagement methods you might use in your interface to support the design persona, and create a memorable experience.
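For teams that keep design documentation in version control, the checklist above maps naturally onto a small structured record. As a hypothetical sketch (the field names are my own illustration, not part of Walter's template), a design persona might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class DesignPersona:
    """A design persona record, loosely following the outline in
    Designing for Emotion. Field names are illustrative, not an
    official schema."""
    brand_name: str
    overview: str
    personality_image: str            # path or URL to the embodiment photo
    brand_traits: list[str]           # five to seven traits to express
    trait_to_avoid: str               # the trait that would take you off-brand
    friendliness: float               # X-axis: -1 (unfriendly) .. 1 (friendly)
    dominance: float                  # Y-axis: -1 (submissive) .. 1 (dominant)
    voice: str
    copy_examples: dict[str, str] = field(default_factory=dict)   # situation -> sample copy
    engagement_methods: list[str] = field(default_factory=list)

# A made-up example persona for a fictional site:
persona = DesignPersona(
    brand_name="Example University Library",
    overview="Knowledgeable but approachable; a helpful guide, never a gatekeeper.",
    personality_image="images/friendly-librarian.jpg",
    brand_traits=["helpful", "curious", "warm", "precise", "playful"],
    trait_to_avoid="condescending",
    friendliness=0.8,
    dominance=-0.2,
    voice="Plainspoken and encouraging; more formal in error messages.",
    copy_examples={"404 page": "Hmm, that shelf is empty. Let's find it together."},
    engagement_methods=["surprise-and-delight microcopy", "playful empty states"],
)
```

Capturing the persona as data rather than a slide deck makes it easy to reference from style guides or content checks, though a plain document works just as well for most teams.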

You can check out and download Aarron Walter's design persona template (with an example) at http://aarronwalter.com/design-personas.

Words to guide us

Here’s one of my favorite related excerpts from the book:

We’re not just designing pages. We’re designing human experiences. Like the visionaries of the Arts and Crafts movement, we know that preserving the human touch and showing ourselves in our work isn’t optional: it’s essential.

So, what about you? What have you been inspired by lately? Did you stumble upon an inspirational quote? Happen across a motivating article? Read an exciting book? Share with us in the comments.

