
Shea Ross Mckinney's Blog Posts

Shea McKinney Posted by Shea Ross McKinney on Monday, April 11, 2016 - 10:20am

There are times when I curse open source software, but those times are far outweighed by the times I am reminded why I love it so much.

Normally my blog posts are on the technical side, because that is where I feel safe and comfortable writing in a public space. But I felt compelled (okay, urged by the boss and team) to write about a moment that I gushed about during a recent staff meeting.

The week started off as a bug fix week, a sprint to tackle long-standing or annoying bugs in our sites and products. One peculiar bug was on our Subsite Feature: when an anonymous user first landed on the subsite node after a cache clear, they would be presented with the default (wrong) theme instead of the alternate theme. Before I even began, one of my workmates (John Bickar) pointed me to a blog article that clearly outlined the problem I was seeing. It was a very timely article, fresh off the press. Looking into the issue, I confirmed that, indeed, the issue described in the article was what I was seeing.

This spurred a major refactor of how the module worked, but that is beside the point. If someone else hadn't written an article about the problem and solution, and my workmate hadn't passed it along, I would have spent many more hours in my debugger trying to figure out the cause of the issue. Yay open source!

Once the module was refactored to work around the issue with using hook_custom_theme() and loading entities, I passed the final code to my other workmate, Greg Garvey, who had done the majority of the heavy lifting on the refactor, to test whether the solution was complete.
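For context, in Drupal 7 hook_custom_theme() runs very early on every request, before normal page rendering, which is why fully loading entities there can interact badly with caching for anonymous users. The sketch below is not the actual module code, just a hypothetical illustration of the safer pattern: decide the theme from lightweight data (path arguments and a pre-built lookup) instead of loading entities during theme negotiation. The module and variable names are invented.

```php
/**
 * Implements hook_custom_theme().
 *
 * Hypothetical sketch: pick a subsite theme from a cached lookup table
 * rather than loading the node entity this early in the request.
 */
function mymodule_custom_theme() {
  // Invented variable: a map of node IDs to theme machine names, kept
  // up to date elsewhere so no entity load is needed here.
  $map = variable_get('mymodule_subsite_themes', array());
  if (arg(0) == 'node' && is_numeric(arg(1)) && isset($map[arg(1)])) {
    return $map[arg(1)];
  }
}
```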

His testing came back negative; the issue was still happening in his environment.

Back and forth we went trying to figure out why he was still seeing the issue and why I was not. Through a bit of chance, we found out that another contributed module on his environment (subpathauto) had the same issue in it. Boo open source! Having already dealt with the issue once, I took it upon myself to fix the issue in the contributed module and posted a patch, which I hope will be adopted soon. Yay open source!

It may have been an unlikely path by which the patch came back to the subpathauto project, but it really highlights the awesomeness that is open source and the Drupal community. Someone with the exact problem I faced took the time to write about it. Someone in my working group shared that information with me, and when I found the issue in another project I was able to contribute my knowledge back to it. I find times like these powerful, and they are what will give Drupal the long-term adoption we hope for.

Shea McKinney Posted by Shea Ross McKinney on Wednesday, December 9, 2015 - 9:14am

Not all modules belong on Drupal.org

Over time Stanford Web Services has built a large library of custom modules, features, and themes. Some of the work made its way back to Drupal.org, but some of it is specific to Stanford or to higher education. You can find most of this work on our GitHub organization page, as we like to be open source. Not only do we share our code with the Stanford community, we share with the whole Drupal community. Anyone and everyone can download and use the things we build.

Sometimes the things we build have bugs and need to be updated. A problem with our features and custom modules is that they don't have an update status like modules released on Drupal.org. Some folks have asked us, "Why don't you run a features server to support update status?" The reason is that we like to work with GitHub, which provides us and our developers many tools that a features server does not. The good news is that GitHub has an API and a release system, which allowed us to create a module that pulls update status information from GitHub into Drupal.

We called it ERUS

You can find ERUS on the project page of Drupal.org or on our GitHub account. ERUS stands for External Repository Update Status. With this module, and a line in your module's .info file, you can get information about release status for your custom modules and themes right in Drupal. This has proven valuable for us as more and more teams across our campus adopt and use our shared work. We don't know where our work will end up next and cannot expect those who install our modules to follow our GitHub pages. Using this module, they can get the update information they need right in their website.

How it works

  1. Pick one of your features or custom modules on GitHub. It has to be in its own repository like: https://github.com/SU-SWS/stanford_bean_types
  2. Add a `project status url` entry to the module's .info file, set to the full URL of the GitHub repository.
  3. Set the version number to something that conforms to the Drupal 7 version naming scheme, e.g.: 7.x-1.0-dev or 7.x-1.1
  4. Create a new release via GitHub's release system using the same name as the version.
  5. Download and enable the ERUS module in your Drupal site.
  6. (optional) If the repository is private you will need to enter a GitHub username and password that can read from the repository.
  7. Visit the update status page in your Drupal site to see the versions.
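Putting steps 2 and 3 together, the .info file for the example repository above might look something like this (the exact key names should be checked against the ERUS documentation; the description text here is invented):

```ini
name = Stanford Bean Types
description = Example feature whose update status is checked against GitHub.
core = 7.x
version = 7.x-1.0

; Point update checking at the GitHub repository instead of Drupal.org.
project status url = https://github.com/SU-SWS/stanford_bean_types
```

The release created in step 4 would then be named 7.x-1.0 to match the version line.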

You can also use Drush to view and update the modules.

For more information on the installation, configuration, and extension of this module, please visit: https://github.com/SU-SWS/erus and https://github.com/SU-SWS/erus/blob/7.x-1.x-dev/plugins/README.md

Example of ERUS in action

(Screenshot: the update status bar showing release information.)

Credits

Thanks to Zach Chandler and Brian Wood for their contributions to this project.

Shea McKinney Posted by Shea Ross McKinney on Thursday, November 12, 2015 - 12:19pm

Creating a Drupal Feature is easy. Creating a Drupal Feature that everyone can use is really hard.

Features, as in the Features module, allows you to very easily export something you have created in one Drupal website in order to use it in another. This is an extremely powerful tool that is the basis for how many Drupal websites are built. Not only can you share functionality from one site to the next, you can use it to start new websites. Over time, as the functionality changes or matures, the Feature can be updated with new items and then re-deployed to the sites currently using it, so that they get the new changes and are not left behind.

Drupal by nature is a complex mess of data and configuration that can live either in code or in a database. Drupal 8 is making significant leaps and bounds to wrangle this mess, but it won't solve all of our problems. In Drupal 7 it is possible to have parts of a Feature in code and parts in the database. Modern site-building strategy holds that having all site configuration in code is the best way to build and maintain a website. Many agencies, teams, and shops use this strategy very successfully to maintain their organizations' websites. They make changes to their Drupal site Features on a local or development environment and use Features to export, or re-export, the code to deploy to a production environment. This works well when you have a developer, or team of developers, maintaining a handful of similar websites. It does not work well if you let site builders make changes through the UI on the live site.

Unleash the UI!

At Stanford we heavily use Features to create our websites. At the time of this article we have over 1,600 websites in production and several products (distributions) that we maintain. Our products share many of our Features and use them in slightly different ways. When we deploy a website for a department or group at Stanford, we hand the keys to the UI over to their team. The department's team is able to create content, Views, Contexts, View Modes, and more. Any seasoned Drupal builder should be shouting "But think of the Features!" right about now. Yes, by handing over the UI, site owners can override Features extremely quickly.

As a developer, overridden Features are a big red flag of unpredictability. They say "Don't touch me. I am a special snowflake now." They are no longer the Feature we once knew and we don't know what they have become. This is a problem because now some of the configuration is in the database and some of the configuration might still be in the code.

Finding the common ground

Finding a common set of configuration for anything with a large set of players is difficult, if not impossible. When you whittle down the feature requests, needs, and wants, then take away the "specific to one" functions, you are left with a very small and, most often, not very useful set of features. Do you still try to find this common set? Is it worth the effort? Who does it benefit? Yes, try, but finding the common ground is not the most important part. What is important is that the feature either supports the special or unique parts through configuration, or accelerates the development of them.

How do you build the feature(s)?

We have four scenarios that we need to account for when building a Feature.

  1. For use in any Drupal website (public)
  2. For use in our (Stanford Web Services) distributions
  3. For use in a client specific distribution or custom website
  4. KIT Compliance (sorta)

This means that the feature not only has to work and look good at the Drupal contributed module level, but also has to have the special sauce required for specific distributions. We need to support things like namespaces, encapsulation, configurability, and upgradability.

A. Namespaces

Naming things is hard. Naming something so that it doesn't collide with other similarly named things is even more difficult. With Features in Drupal 7 this means prefixing everything with a namespace. From your content types to your fields to your views, it is important that you prefix the machine names with your namespace. 

Example for a page content type:

content type label:         Page
content type machine name:  stanford_page
fields:                     field_s_page_fieldname
views:                      stanford_page_view
context machine name:       stanford_page
context group:              Stanford Page

B. Encapsulation

Feature encapsulation is extremely important. De-coupling the feature from themes, other features, and site-specific features is key to the ability to use it anywhere. Even though you can use a Feature anywhere, there is still the expectation that it will do something and look good out of the box. Providing sensible defaults is a way to address the out-of-the-box expectation, but as the word default implies, there needs to be a way to change or override them. Some things that change often are:

  1. Permissions
  2. Display settings, theme options, view modes, or page layouts
  3. Fields
  4. Front end and administration views

Keeping Feature modules de-coupled is important. A good example of when to break something out into its own module is when a content type has an entity reference field pointing to an entity type that lives in another Feature module. That field creates a direct dependency on the other Feature module. If possible, break the entity reference field out into its own module so that the core content type and fields can be used standalone.
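As a rough illustration of that split, the .info files might be structured like this. The module names below (other than stanford_page, used as the namespace example earlier) are hypothetical:

```ini
; stanford_page/stanford_page.info
; The core content type and fields: no dependency on the referenced
; entity type's Feature, so it can be used standalone.
name = Stanford Page
core = 7.x
dependencies[] = features

; stanford_page_related/stanford_page_related.info
; Holds only the entity reference field, and carries the cross-Feature
; dependency so the core module stays de-coupled.
name = Stanford Page Related Content
core = 7.x
dependencies[] = stanford_page
dependencies[] = entityreference
dependencies[] = stanford_related_items
```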

C. Configurability

Allowing for site-specific configuration within a Feature module is important for its success and adoption. Allowing a feature to have options can be as simple as breaking choices out into their own separate Feature modules. This lets site administrators, or installation profiles, choose which options they want to turn on.

Scenario #1: I want one of the bundled options
For example, if you created a page feature that could have sidebar blocks in the left or right columns, you could separate the two options into Feature modules of their own and allow individual sites to enable one or the other.

Scenario #2: I don't want any of the bundled options
For example, if you created a page feature with a separate Feature module for the node display, as a site builder I could choose to disable, or not enable, the Feature module with the node display in it. This would allow the site builder to then build any layout for the page feature without overriding the Feature.

Scenario #3: I want what is bundled but slightly different
Creating separate Feature modules for choices is not always a fit, as it is an on-or-off strategy. Sometimes you want more options, and for that you will need a developer to create a configuration page/form/place for the additional choices. Adding modules and custom code that hook into existing features to make the changes will prevent a feature from being overridden.
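The "hook in, don't override" idea from Scenario #3 can be sketched with a Drupal 7 alter hook: a small companion module adjusts a view shipped by a Feature before it is used, so the Feature itself never shows as overridden. The module name is hypothetical; stanford_page_view is the view name from the namespace example above.

```php
/**
 * Implements hook_views_default_views_alter().
 *
 * Sketch only: tweak a Feature-provided view in code so the change
 * survives Feature reverts and the Feature stays in its default state.
 */
function mysite_tweaks_views_default_views_alter(&$views) {
  if (isset($views['stanford_page_view'])) {
    // Show 20 results per page instead of the bundled default.
    $views['stanford_page_view']->display['default']
      ->display_options['pager']['options']['items_per_page'] = 20;
  }
}
```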

D. Upgradability

It is important that, as a Feature changes over time, older versions of the Feature do not get forgotten. Newer versions of a feature should not break older versions. For example, if a field called field_s_page_type gets renamed to field_s_page_category, an update hook will need to be written to handle the change, including moving all of the data from the previous field into the new one.
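A simplified sketch of such an update hook, using the field names from the example above, might look like the following. This is illustrative only: it assumes the new field already exists (created by a reverted Feature or by field_create_field()/field_create_instance()), and a production version would batch the node loop for large datasets.

```php
/**
 * Implements hook_update_N().
 *
 * Move data from field_s_page_type into field_s_page_category, then
 * remove the old field. Sketch only; batching omitted for brevity.
 */
function stanford_page_update_7101() {
  $nodes = node_load_multiple(array(), array('type' => 'stanford_page'));
  foreach ($nodes as $node) {
    if (!empty($node->field_s_page_type)) {
      $node->field_s_page_category = $node->field_s_page_type;
      $node->field_s_page_type = array();
      node_save($node);
    }
  }
  field_delete_field('field_s_page_type');
}
```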

The general rule: New versions cannot break older versions. 

E. General practice

  • Stay as close to KIT compliant as possible.
  • Never put content in a Feature. There are better methods for creating content.
  • Content includes but is not limited to:
    • Taxonomy terms
    • Menu items
    • Block classes
    • Nodes
    • Users
  • Never use a strongarm variable that is used outside of the feature it is intended for.
    • For example, Display Suite exports a variable for its field-to-block feature that stores information for all content types. This breaks the encapsulation rule and cannot be used.

Working with shared features

The article above details how to create and maintain Features in Drupal. Please read part two of this topic, "Tips on how not to bork your features", which is geared towards site builders working with a site built from shared features.

Shea McKinney Posted by Shea Ross McKinney on Thursday, February 26, 2015 - 11:00am

Drupal comes with a built in search module that provides some pretty basic search options. Like everything else we do, we ask ourselves, "how can this be better?" One of our recent projects has given us the opportunity to evaluate some new options for search. 

Note: This evaluation has a specific case of needs/wants and my notes on them are below. 

Wish list, in order of importance

  1. Facets
  2. Grouping of results by content type
  3. Autocomplete or recommendations
  4. Spellcheck 
  5. Biasing or the ability to order the results outside the natural result set
  6. Search analytics
  7. How easily can I wrap up all the configuration into an installable module or profile?

What is out there? What's Hot?

The first thing to decide on is which engine is going to power the new search, and it turns out there are a few options. Through my research I found that there are some big differences and specific use cases for each of the available search options below. I also found a resource that does a good job of comparing and illustrating the search options available for Drupal. It is important as a developer to look at the industry to see what is happening in the space you are evaluating: look beyond the download counter on the module page and which project has active commits. If my selection were based purely on those two credentials, Search API DB and Solr would be clear winners. Those two, although good options for some, might not meet the needs of your project. What I found was that one particular engine was stirring up some waves.

Contrib search options for Drupal:

  • Acquia Solr
  • Other Solr
  • Search API with search db 
  • Sphinx
  • Elasticsearch
  • Fuzzy Search 
  • Xapian
  • Sarnia
  • Google Custom Search
  • Custom written extensions of Drupal core search
  • Fake it with views and exposed filters

What are these things?

Search API is a collection of modules that allow site builders to build out advanced full text search solutions for Drupal. Right from the project page itself:

"This module provides a framework for easily creating searches on any entity known to Drupal, using any kind of search engine. For site administrators, it is a great alternative to other search solutions, since it already incorporates faceting support and the ability to use the Views module for displaying search results, filters, etc. Also, with the Apache Solr integration, a high-performance search engine is available for this module.

Developers, on the other hand, will be impressed by the large flexibility and numerous ways of extension the module provides. Hence, the growing number of additional contrib modules, providing additional functionality or helping users customize some aspects of the search process." (Drupal.org project page)

Solr is a super fast, open source, standalone search engine built on Lucene in Java. Solr needs to be installed and run as its own service. For a full list of features and information, check out Solr's features page. It is one of the most used and most actively developed search engines in the Drupal space; Drupal.org uses it to power its search as well as the project browsing pages. The Drupal community has been very active around this search solution and has provided several implementation methods as well as a number of contributed modules to make it better. Acquia has provided support and development for a number of modules that use Solr, and offers a hosted Solr solution. There are a number of hosted solutions out there if you cannot install, or do not want to support, your own Solr instance. For example, if you are on a shared hosting platform you will probably want to go with a hosted solution.

Search API Database Search is a PHP/database solution. "It is therefore a cheap and simple alternative to backends like Solr, but can also be a great option for larger sites if you know what you're doing." (Drupal.org project page) This module is built for the Search API module search solution. It provides a much stronger search than the out-of-the-box Drupal core search and can be used on any Drupal website and hosting environment. Because of its underlying technologies, however, this module can perform poorly on highly trafficked and large websites. For large-scale websites with lots of content, this option could potentially eat up your server's resources and slow down the website.

Sphinx is for enterprise or massive-scale websites. This search engine powers Craigslist, which claims over 300 million search queries a day.

"Sphinx is an open source full text search server, designed from the ground up with performance, relevance (aka search quality), and integration simplicity in mind. It's written in C++ and works on Linux (RedHat, Ubuntu, etc), Windows, MacOS, Solaris, FreeBSD, and a few other systems." (sphinxsearch.com)

Elasticsearch looks to be the new hotness. This is the product that is making waves in the search community; so big are the waves that a Solr hosting service has taken the time to address it. It is fast, feature rich, has a sexy UI, and comes with a number of extra tools that provide valuable information and functionality. Like Solr, Elasticsearch is a standalone search engine that needs to be installed on its own and connected to via Drupal modules. There are also a number of hosted solutions available. This search engine was built from the ground up with the cloud in mind.

"Elasticsearch is a flexible and powerful open source, distributed, real-time search and analytics engine. Architected from the ground up for use in distributed environments where reliability and scalability are must haves. Elasticsearch gives you the ability to move easily beyond simple full-text search." (elasticsearch.org)

The extra tools that come with Elasticsearch are:

  1. Logstash, a time based event data logger.
  2. Kibana, a data visualization tool that gives you dashboards of information in real time about the data being indexed.
  3. Marvel, a deployment and cluster management tool that provides historical and real time information about your Elasticsearch servers.

Fuzzy Search is similar to Search API Database Search in that it is also a PHP/Database solution. It can be installed on to any Drupal website and integrates with Search API. 

Fuzzy matching is implemented using ngrams. Each word in a node is split into 3-letter (default) segments, so 'apple' gets indexed as 3 smaller strings: 'app', 'ppl', 'ple'. The effect is that as long as your search matches X percent (configurable in the admin settings) of the word, the node will be pulled up in the results.

Although a likely candidate to rival Search API Database Search, this project looks to be stale. There is no stable release for Drupal 7, the last commit was in 2013, and the module status is 'seeking new maintainer'. It does have a decent install base, with over 2,100 websites, but the lack of development is discouraging.

Xapian "is an Open Source Search Engine Library, released under the GPL. It's written in C++, with bindings to allow use from Perl, Python, PHP, Java, Tcl, C#, Ruby, Lua, Erlang and Node.js." (http://xapian.org/) Xapian's strength looks to be document indexing, specifically large documents. **Full disclosure:** I did not get around to testing it, but here is a link to a video where Simon Lindsay talks about the project. You can see his part at the 11:50 mark.

Xapian is a highly adaptable toolkit which allows developers to easily add advanced indexing and search facilities to their own applications. It supports the Probabilistic Information Retrieval model and also supports a rich set of boolean query operators.
 
Sarnia "allows a Drupal site to interact with and display data from Solr cores with arbitrary schemas, mainly by building views. This is useful for Solr cores that index large, external (ie, non-Drupal) datasets that either aren't practical to store in Drupal or that are already indexed in Solr." (Drupal.org project page)
 
This looks really cool but was outside the scope for my testing. I would love to hear more about it.
 

Google Custom Search is different from all of the above options in that it is an embedded search engine that you get from Google. It uses a crawler and sitemap.xml data to crawl your website and provide Google-like searching of it. The downside is that it does not provide the control over what to index and how to display results that I want. It is a great option for a quick and easy search solution.

Custom coding is always an option if you have the expertise and time available. However, Drupal is open source software with many viable search options; it would be silly not to use or build upon a project that has already been started.

Faking it with views exposed filters is a fast, cheap, and "not a real search but sometimes good enough" (Shea McKinney) solution. If you are looking for exact keyword matching or simple filtering this may be a less resource intensive option. Views exposed filters should not be viewed as a complete search option. 

Quick elimination

Now that I know what the playing field is, it is time to make the first round of cuts. Here are some quick notes on why I chose to remove a few options from the list.

  • There are many contrib modules that provide the added functionality we want and it would be far more effort to write our own. Going fully custom won't be needed here.
  • Views with exposed filters would allow a lot of control over the display of results, but they are field-based and cause problems quickly when there are multiple content types in play.
  • Google custom search or other 3rd party crawlers do not provide enough control over display, don't support facets, and can only index publicly available content.
  • Xapian looks promising but is not as feature rich as other options and requires PHP libraries to be installed on the server.
  • Sarnia looks interesting and is built on Search API and Solr but is best used for large amounts of external Solr data and is probably more than we need.
  • Sphinx is very fast because it uses real-time indexes. Its best use case is a site that has hundreds to thousands of new entities created per hour and needs content to be instantly searchable. Our typical use case does not have this volume of new content on a regular basis.
  • Local Solr setup is not an option as we do not have the resources to set up and maintain a Solr search server in our environment.
  • The Fuzzy Search project is looking for a new maintainer, and although it could be a good opportunity to pick up and help a contrib module, there are other more interesting projects out there.

A closer look

After reviewing all of the options above, it looked like there were a few really good choices, so it was time to put them through their paces. To test, I installed our base distribution, which comes with a number of contributed modules and a few content types, and configured from scratch one index for all content types, including six different field types, author and taxonomy relationships, search field biasing, facets, and autocomplete. From there I generated roughly 200 taxonomy terms and attached them to roughly 7,000 nodes of varying content types. I ran indexing immediately and selected a few nodes for my target searches. I searched for those nodes on multiple fields using multiple keywords and compared the results against my arbitrary values of relevancy.

Below is a feature breakdown of each option.

Search API
Cost: FREE, as in beer.
Features:
  • Autocomplete (contrib)
  • Search live results (contrib)
  • Saved searches (contrib)
  • Range searches (contrib)
  • Search sorting (contrib)
  • Location searching (contrib)
  • Search Pages or Search Views (contrib)
  • Search statistics (contrib)
  • Multiple search indexes
  • Integrates with Views
  • Index entities immediately or on cron
  • Multiple index searching
Cons:
  • Add-on modules to the Search API module have the feel of being buggy. This is a shame since the Search API module itself is exceptionally well maintained.
  • The spellcheck project is 4 years old with no commits.

Search API + Search Database
Cost: FREE, as in beer.
Features:
  • All of the Search API features, and...
  • Install anywhere you have Drupal installed
  • Result biasing
  • Search facets (contrib)
  • Portable / migratable
Cons:
  • Less accurate and powerful than Solr or Elasticsearch
  • Needs Apache Tika installed to index files

Search API + externally hosted Solr
Cost: cheapest, $10.00/month; most expensive, thousands of dollars a month.
Features:
  • All of the Search API features, and...
  • Fast searching
  • Result biasing
  • Search facets (contrib)
  • Portable / migratable
  • Can index files
Cons:
  • Can take some time to index many and/or large documents

Search API + externally hosted Elasticsearch
Cost: anywhere from $37.00/month to thousands of dollars a month.
Features:
  • All of the Search API features, and...
  • Fast searching and indexing
  • Search facets (contrib)
  • Multiple transports (cURL, Guzzle, Thrift, Memcached)
  • Bonus software and monitoring tools
  • Can index files
Cons:
  • During testing I periodically dropped, created, and copied settings around to various test URLs and environments, and had some issues with the database index machine names.

Search API autocomplete vs Search API live results

Search API autocomplete provides an autocomplete search field that displays the keyword or matching keywords the user is typing plus the number of results that keyword would return in a drop down box off of the search field. The live results module does roughly the same except that it displays search results and not keywords in the drop down box. Clicking or selecting a search result from the drop down takes the user directly to that search result page. 

Winner for use on a search page: Search API Autocomplete

Decision

The status of search in Drupal is good. There are a number of powerful and easy-to-implement options out there, and Search API is leading the way. It empowers a site to move past Drupal core search and utilize a full text search option. For us and our clients' needs, we will be looking to build out a graduated option. For the most common use case, a search implementation with Search API + Search API Database Search will be sufficient. For those sites that need something more robust, a migration path to Search API + Solr will be used.

Why did we choose Solr over Elasticsearch? 

It was the slimmest of margins that allowed Solr to top Elasticsearch. Looking specifically at our needs, this breakdown will discuss the points we valued as important.

Functionality

Our needs are simple. We want our search engine to provide accurate results, facets, excerpt snippets, and possibly the ability to index raw files. Both Solr and Elasticsearch performed these operations very well.

Performance

Testing on several remote services and standing up local instances of each search appliance, at our scale with one index, the search performance of each was very similar. Elasticsearch out-performed Solr on indexing, due to indexing waits on the Solr hosts where Elasticsearch was effectively instantaneous.

Ease of setup and use

As we decided to go with 3rd-party hosting options, the setup and connection for each option was very similar. Both options have easy-to-follow configuration options in their respective modules.

3rd party options

There are several options for using 3rd party Solr hosts including Acquia. Generally, Solr hosting has the cheaper options but both scale to the thousands of dollars a month range.

Project activity

Strictly looking at momentum in the Drupal community, Elasticsearch has come a long way recently. Solr stands out as the most developed-for option and comes with a number of contributed modules and features. Solr looks to be the more mature project, but Elasticsearch is making great headway. With some great features in development, keep your eyes on Elasticsearch and its progress.

Support

With a number of groups on campus already using Solr as their search engine of choice, it makes sense to also use Solr. Not only will we be in line with the rest of the campus, but we will also have resources available should we need extra support.


Shea McKinney Posted by Shea Ross McKinney on Monday, October 27, 2014 - 8:30am

In this article we are going to talk about what hard and soft configuration is and how to decide between the two when creating a distribution.

At Stanford Web Services we have several products, or Drupal distributions, and we build them with contributed modules, installation profiles, Features, and custom modules. When we design a product we have to decide what should never change and what can, or will, change but should have a sensible default when the site is initially installed.

If you are a developer or site builder and are working with or creating a distribution, please read on.

What is hard vs. soft config?

  • Hard configuration is made up of settings that should never change throughout the lifetime of the website.
  • Soft configuration is made up of settings that can change or be removed but that provide a sensible default or starting point.

The what

Using a Classic Club sandwich ordered off the menu at Quiznos as an example, the sandwich is the product. The sandwich has hard configuration in the proteins it comes with (ham, turkey, and bacon), and soft configuration in the toppings that are included. Quiznos suggests that you eat the Classic Club with cheddar cheese, tomatoes, lettuce, and mayo. As an end user you may not like tomatoes and would rather have olives and mustard on the sandwich.

Much like Quiznos, when we design our products we have to decide what an administrator or user can change and what they cannot. For the items a user can change, we should provide an out-of-the-box set of options that appeals to the majority of users. An example of hard configuration in our products is our WYSIWYG and text format configuration: we have established a set of options and formats that should be consistent throughout all of our products. An example of soft configuration is user roles and permissions: we provide defaults that make sense for most websites, but administrators may want to change which content types their staff do and do not have access to.

The how

Ok, great. So now that you understand what hard vs. soft configuration is, how do you get it into your product? Your first guess may be Features, and you are right. Features is great for packaging up settings and putting them into code, and settings in code are a great way to store hard configuration. But wait, you can override Features through the user interface. Shouldn't hard configuration be something that never changes? You are right again: hard configuration should be set in stone, so when working with Features you have to be very careful not to give administrator users access to the sections where they might change your hard configuration. The good thing about Features is that if the configuration ever does change, you can quickly roll back to the original state using Features' revert functionality. Another way to write hard configuration is directly into your custom modules; you may use custom modules to hard-code items like content types, entities, and views.

If Features is good for hard configuration, is it also good for soft configuration? The answer will differ based on who you ask, but in our opinion: not really. It depends on the development workflow or site-building practices in use. Features can be overridden through the UI and therefore can provide sensible defaults that users may change. If your site-building practices include re-exporting and keeping the Feature modules up to date with the latest changes, you can use them effectively and efficiently. When designing a distribution, though, we have to keep in mind that not everyone will keep Features up to date while building out their website. We also do not want to pigeonhole our site builders into this process or force them to abstract out the parts of the feature they want to disable. A better option for soft configuration is to write run-once code: code that lives either in an installation profile install task or in a custom module's hook_install(). By putting the soft configuration into one of these two places we can safely provide out-of-the-box settings that are usable and changeable by site administrators. If the site administrator later decides to put their new changes into a Feature, they may do so without running into conflicts with other features.
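As a sketch of the hook_install() approach (Drupal 7; the module name, variable, role, and permission below are hypothetical examples, not taken from any real product):

```php
<?php
/**
 * Implements hook_install().
 *
 * Soft configuration: this runs once when the module is installed,
 * providing defaults that site administrators may later change through
 * the UI without anything reverting them.
 */
function mymodule_install() {
  // A sensible default that administrators are free to change later.
  variable_set('mymodule_items_per_page', 10);

  // Grant a default permission to an existing role, if that role exists.
  $role = user_role_load_by_name('site editor');
  if ($role) {
    user_role_grant_permissions($role->rid, array('create article content'));
  }
}
```

Because this code only runs at install time, nothing will fight the administrator if they later change the variable or revoke the permission.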

Kit compliance is important for creating reusable features, and it does a good job of determining which items can, or should, go into a feature and which should not. The items that should not are good candidates for soft configuration. For example, Kit compliance allows for user roles and permissions that are directly related to the feature itself, but not for roles or permissions established by other contributed modules. If you want to set permissions that are not directly related to your feature, you may do so in an installation profile install task. This task runs once during the installation of the site. By setting permissions in an install task you lose the ability to revert the configuration through Features, but you gain flexibility in that you can change the setting later without worry.
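Such an install task might look like the following sketch (Drupal 7; the profile name, role, and permission are hypothetical placeholders):

```php
<?php
/**
 * Implements hook_install_tasks().
 *
 * Lives in myprofile.profile. Registers a task that runs once during
 * site installation and is never touched by Features afterwards.
 */
function myprofile_install_tasks(&$install_state) {
  return array(
    'myprofile_set_default_permissions' => array(
      'display_name' => st('Set default permissions'),
    ),
  );
}

/**
 * Install task callback: grant a permission that falls outside kit
 * compliance because another contributed module defines it.
 */
function myprofile_set_default_permissions() {
  $role = user_role_load_by_name('administrator');
  if ($role) {
    user_role_grant_permissions($role->rid, array('administer pathauto'));
  }
}
```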

Please check out these resources for more information on how to create a distribution:

Shea McKinney Posted by Shea Ross McKinney on Tuesday, July 22, 2014 - 9:56am

Small commits allow for big wins.

Something I have been using a lot lately is Git's cherry-pick command. I find it very useful, and it saves me bunches of time. Here is a quick lesson on what it does and an example use case.

What is git cherry-pick? (man page)

Git cherry-pick allows you to apply a single commit from one branch onto another. To use the cherry-pick command, follow these steps:

  1. Check out the branch into which you want to merge the commit. (E.g.: git checkout master)
  2. Identify the commit hash through your favorite method. You can use git log, a GUI tool such as Sourcetree or Tower, or, if you use GitHub or Bitbucket, their web interface. At SWS we use GitHub, so I tend to use that method often. With GitHub you can find the commit hash on the commit list page or on the individual commit page itself. See the screenshots below.
  3. Pick 'em! E.g.: git cherry-pick 9638d99c89a92350ad2f205f47af24b81ac39a64
  4. If there is a merge conflict you will have to resolve that with your favorite merge tool, but otherwise you have now successfully pulled in the work from the other branch. 
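The steps above can be sketched end to end in a throwaway repository (all branch names, file names, and commit messages here are invented for the demo):

```shell
#!/bin/sh
# Build a scratch repo with a base branch and a feature branch, then
# cherry-pick only the bug-fix commit back onto the base branch.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"

echo base > file.txt
git add file.txt
git commit -qm "base"
main=$(git rev-parse --abbrev-ref HEAD)   # master or main, depending on git config

git checkout -qb feature
echo fix > fix.txt
git add fix.txt
git commit -qm "bug fix"                  # the one commit we want
echo wip > wip.txt
git add wip.txt
git commit -qm "unfinished work"          # a commit we do NOT want

# Step 1: check out the branch into which you want the commit.
git checkout -q "$main"
# Step 2: identify the commit hash (here via git log --grep).
hash=$(git log feature --format=%H --grep="bug fix")
# Step 3: pick it.
git cherry-pick "$hash"
ls   # fix.txt arrived; wip.txt did not come along
```

Note that cherry-pick creates a new commit with a new hash on the target branch; only the change itself is copied over.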

Why would I use this?

That is a good question. Why would you cherry-pick instead of just merging one branch into the other? The short answer is that you cherry-pick when a full branch merge is not possible due to incompatible versions or simply because you do not want everything in the other branch. 

A common use case for Drupal module maintainers is when a security vulnerability has been identified and fixed in one Drupal version of a module and the fix has to be applied to the others. For example, if a security fix lands in the Drupal 6 version of the stanford_events module, that fix may also apply to the Drupal 7 version. Instead of applying the change manually to the Drupal 7 version, the developer can use git cherry-pick on that commit and carry on.

How to find the commit hash on GitHub:

Shea McKinney Posted by Shea Ross McKinney on Monday, June 2, 2014 - 9:00am

Sometimes it is about the small things. Something missing from Drupal's date popup field was a time popup. The calendar popup is very useful and very user friendly, but its sister field, the time field, is not.

Just give me the goods: Drupal Module Project Page

Time Picker

Drupal time - out of the box

The out-of-the-box configuration of a date field allows you to specify time granularity all the way down to seconds. If you choose a granularity of anything smaller than a day, you will be presented with two input fields: one for the month, day, and year, and another for the hour, minute, and seconds.

Awkward input UI

So great, the user has two fields, but when the user clicks into the time field to set the time value they run into some hidden functionality. The date popup field breaks the hour, minute, and seconds portions of the input apart using JavaScript and forces a highlight onto only a single portion of the input field at a time.

For example, when the user clicks into the time field, only the hour portion is highlighted. The user can then start to type the hour or use the arrow keys to increase or decrease it. "That doesn't sound so bad," you may be thinking, but this breaks in a couple of places.

What if the user wants to change only the minutes? They have to be very careful to click into exactly the right portion of the field to select the minute digits, or else they are dropped into the hours. What if the user has a date in their clipboard that they want to paste in? That is trouble as well: it is difficult to select the whole contents of the field and paste in a new value. So what did we do to fix this?

A new timepicker to improve UI

We created a small module to fill the gap that the date popup field left us. And what better way to complement a date popup field than with a time popup? We selected a timepicker plugin with a good range of options and settled on its drop-down select fields display. We then overrode the default time input for date popup fields so that this plugin displays instead.

The module requires very little setup and minimal configuration. For installation, please see the Drupal module page or our GitHub page. Once you have downloaded and enabled the module, all date fields that use the date popup widget should receive the new drop-down time select by default. The drop-down select items automatically match your fields' configured date format. Download, install, and forget.

Happy Picking!

 

Plugin Page: http://trentrichardson.com/examples/timepicker/

Drupal Module Page: https://drupal.org/project/stanford_date_timepicker

 

Shea McKinney Posted by Shea Ross McKinney on Monday, February 17, 2014 - 9:07am

Views offers the ability to expose filters to the end user so they may filter and sort through a views listing to find what they want in a large list of content. If you have used exposed filters before you will be familiar with exposing a filter on a specific field, such as the title field, for example. But what if you want the end user to be able to search in multiple fields at the same time?

Before we dive in, if you're unfamiliar with the terminology and functionality of views in Drupal, please visit the Views module documentation.

So, to create an exposed filter that lets users search multiple fields at the same time, my first thought was to create a custom module with a hook_views_query_alter() implementation that would join the fields together. This is not needed, however. Views 7.x-3.x ships with a 'Global: Combine fields filter' option that lets you expose one filter field to the end user and pick one or more fields to filter on.

Example

On a website we are building, we recently ran into the following issue. In the screenshot below there is an exposed filter block on the left with an exposed title filter. This field allows end users to search for a title in the view listing to the right of the filter block. It works great if the user knows the full title of the content they are looking for, but in practice many of these items are referred to by a short acronym. For example, the Achievement Rewards for College content item is also known as "ARCS." We would like the correct content to show up even when the user searches for ARCS, and to do that we need a combined filter.

 

VGPE Fellowship Exposed Filter View

View Setup

The following example will assume that you know the basics of creating views in Drupal.

  1. The first step is to add the new combined exposed filter to your view. To do that click on the add button in the filter criteria box and wait for the overlay box to load.
  2. Once it has loaded type in the word 'global' into the search field at the top of the overlay window. You should be presented with at least one option.
  3. Select the 'Global: Combine fields filter' option and click the apply button.
  4. Now that the filter has been added, you will need to configure it: expose it to the user and select the fields you wish to combine the search on.
    Click add
    Select Global
  5. Expose the filter to the end user by checking the exposed filter checkbox.
  6. Ensure you add a title for the user that is descriptive. The description field is optional.
  7. The operator select box provides several options for how the search should be performed. For this example we will use the contains option, as it provides us with the most results.
  8. Select all of the fields that this one filter will search. Below, the fields Title and Tagline are selected. You may select more than two fields.
    More settings
     
  9. Click apply and save the view. That is all the configuration this view needs. Now one exposed search field queries multiple fields at once.

 
