Monday, December 27, 2010

script_finder 1.0

Great news: I got my changes into script_finder, and Tobias (@tcrawley) bumped the version and released the gem as 1.0.0.

Updated gem release

Later this week, considering how slow work is going to be, I am contemplating attacking the gem's known issues and figuring out how to get Rails to specify which commands it supports and strip them off of the command line return.

And I am super stoked considering this is my first solo contribution to the community!

Thursday, December 23, 2010

Tobias' script_finder gem

I took a quick break from coding my web page to help myself out. I cut a branch of script_finder, a gem that I have become addicted to in recent months. There was a minor issue with it: it was not wired to work with Rails 3.

I did some grinding on it and sent Tobias an email or twelve letting him know what I was doing as well as asking for feedback. I did a "final" commit today and sent up a pull request. Let's see what happens.


Tobias' script_finder

Tuesday, December 21, 2010

My Chain Maille project's conversion to Rails 3

This week I took the time to upgrade my new project from Rails 2 to Rails 3. Thank your respective deity that I did so. I do have a few complaints: I miss having to manually hack in Bundler's gem management, as well as the long, complicated routes file. Where, I ask you, will I spend those hours debugging that used to boil down to a bad route configuration? Where? I shudder at the thought of being able to read my routes file and not having to fight with gem versions.

One thing that truly bothered me, a minor annoyance but one nonetheless, was that script_finder no longer works. I think I will have to fork the project and provide a Rails 3 version, that is, of course, unless someone can provide me with a link to a Rails 3 friendly version.

Friday, December 17, 2010

Polymorphic Paths

One of my friends directed me to this post. Great info on how to use polymorphic paths to save yourself some confusion and work.

http://rookieonrails.blogspot.com/2008/01/sti-views-revisited-or-polymorphic.html
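
For reference, the core trick (sketched here with hypothetical commentable models, not the post's exact example) is letting Rails pick the route from the object's class:

  # Hypothetical setup: Comment belongs_to :commentable, :polymorphic => true,
  # and both Post and Page have many comments.
  polymorphic_path(@comment.commentable)   # => "/posts/1" or "/pages/1"
  polymorphic_path([@post, @comment])      # => "/posts/1/comments/5"

That keeps the view code from having to know which subclass or parent it is rendering.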

Thursday, December 16, 2010

Great Quote

I have been looking for this quote for some time:

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

-- Brian Kernighan

Monday, December 13, 2010

Pluralization Customization in Rails 2

Got stuck with a bit of an issue:

Rails thinks that the plural of weave is weaves, but that the singular of weaves is weaf.
Exactly what a weaf is... well, I don't know either.

Fix:

ActiveSupport::Inflector.inflections do |inflect|
  # One irregular declaration covers both directions:
  # "weave".pluralize => "weaves" and "weaves".singularize => "weave"
  inflect.irregular 'weave', 'weaves'
end


in your config/initializers/inflections.rb
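
A quick sanity check from script/console (the results shown are what I expect once the inflection is in place, not captured output):

  "weave".pluralize     # => "weaves"
  "weaves".singularize  # => "weave"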

Chainmail

I got bored over the course of last week and this weekend and decided that my usual chainmail site is starting to annoy me. It does not provide the reference I want for either my clients or for myself. I have a hard time remembering from order to order what AR (aspect ratio) I want for my Elf Weave. So, new project. Should be interesting and give me something to let the guy I am mentoring learn on.

Links to follow soon.

Thursday, December 2, 2010

Why Work Does Not Get Done at Work

I saw this feed come through on Twitter and listened to the talk. It is brilliant. Everyone should watch this; managers should watch it twice, then write it down, and think about getting some of the transcript tattooed onto their arms so they can see it at work every day. Thanks, TED.

TED No Work At Work

Thursday, November 18, 2010

Vim Tricks and Toys

So after yesterday's rant about Vim and its uses, I figured I should talk about some of the toys that I use with Vim that I love.

- Spell check
- Completion
- Auto-completion
- Surround
- Ack
- Ruby Compiler
- Ctags
- Omni Completion
- Nerd Tree
- Fuzzy Finder
- NERD Commenter
- Command T
- Vicle

Yep, a list, I know. The configuration that goes with it is surprisingly short. God knows the spell check plug-in is one of the most important to me, considering I spell about as well as a third grader. Auto-completion is brilliant as long as you know how to use it. There are some similar plug-ins that provide language-specific shortcuts. These are pretty slick as long as you make sure you are using the one for your current programming language. Using the Perl one with Ruby has interesting results.

There are also a few things that you can do without changing your configuration file. One of the nice ones is
:!irb
which executes irb from inside of Vim and makes testing changes on the fly easy. It also makes it very quick to run commands like
:!script/generate migration
Right after that you can refresh your NERDTree and keep working.

Here is my vimrc.


"Before merge of files these existed

set cf " Enable error files & error jumping.
set clipboard+=unnamed " Yanks go on clipboard instead.
set history=256 " Number of things to remember in history.
set autowrite " Writes on make/shell commands
set timeoutlen=250 " Time to wait after ESC (default causes an annoying delay)

set nocp
set cinoptions=:0,p0,t0
set cinwords=if,else,while,do,for,switch,case
set formatoptions=tcqr
set cindent
set autoindent
set smarttab
set expandtab
set wrap

" Visual
set showmatch " Show matching brackets.
set mat=5 " Bracket blinking.
set list
" Show $ at end of line and trailing space as ~
set lcs=tab:\ \ ,eol:$,trail:~,extends:>,precedes:<
"set novisualbell " No blinking
set noerrorbells " No noise.
set laststatus=2 " Always show status line.

" -----------------------------------------------------------------------------
" | VIM Settings |
" | (see gvimrc for gui vim settings) |
" | |
" | Some highlights: |
" | jj = Very useful for keeping your hands on the home row |
" | ,n = toggle NERDTree off and on |
" | |
" | ,f = fuzzy find all files |
" | ,b = fuzzy find in all buffers |
" | ,p = go to previous file |
" | |
" | hh = inserts '=>' |
" | aa = inserts '@' |
" | |
" | ,h = new horizontal window |
" | ,v = new vertical window |
" | |
" | ,i = toggle invisibles |
" | |
" | enter and shift-enter = adds a new line after/before the current line |
" | |
" | :call Tabstyle_tabs = set tab to real tabs |
" | :call Tabstyle_spaces = set tab to 2 spaces |
" | |
" | Put machine/user specific settings in ~/.vimrc.local |
" | CTAGS C-] - go to definition |
" | C-T - Jump back from the definition. |
" | C-W C-] - Open the definition in a horizontal split |
" | C-\ - Open the definition in a new tab |
" | A-] - Open the definition in a vertical split |
" |
" | After the tags are generated. You can use the following keys to tag into and tag out of functions:
" |
" | Ctrl-Left_MouseClick - Go to definition |
" | Ctrl-Right_MouseClick - Jump back from definition |
"

set nocompatible " We're running Vim, not Vi!

"Set Mapping to ,
"**********************************************************
let mapleader = ","
" Use jj as escape... Easier?
imap jj <Esc>


" Tabs ************************************************************************
"set sta " a in an indent inserts 'shiftwidth' spaces
function! Tabstyle_tabs()
" Using 4 column tabs
set softtabstop=4
set shiftwidth=4
set tabstop=4
set noexpandtab
autocmd User Rails set softtabstop=4
autocmd User Rails set shiftwidth=4
autocmd User Rails set tabstop=4
autocmd User Rails set noexpandtab
endfunction

function! Tabstyle_spaces()
" Use 2 spaces
set softtabstop=2
set shiftwidth=2
set tabstop=2
set expandtab
endfunction

call Tabstyle_spaces()

set ts=2 " Tabs are 2 spaces
set bs=2 " Backspace over everything in insert mode
set shiftwidth=2 " Tabs under smart indenting

"Ctags and other shortcuts
"***********************************************
map <C-\> :tab split<CR>:exec("tag ".expand("<cword>"))<CR>
map <A-]> :vsp<CR>:exec("tag ".expand("<cword>"))<CR>

"Sets the tags directory to look backwards till it finds a tags dir
set tags=tags;/

au BufWritePost *.rb silent! !ctags -a --recurse -f ~/dev/tags/cuttlefish &



"Indenting *******************************************************************
set ai " Automatically set the indent of a new line (local to buffer)
set si " smartindent (local to buffer)

" Scrollbars ******************************************************************
set sidescrolloff=2
set numberwidth=4

" Windows *********************************************************************
set equalalways " Multiple windows, when created, are equal in size
set splitbelow splitright

" Vertical and horizontal split then hop to a new buffer
:noremap <Leader>v :vsp^M^W^W
:noremap <Leader>h :split^M^W^W

" Cursor highlights ***********************************************************
set cursorline
"set cursorcolumn

" Searching *******************************************************************
set hlsearch " highlight search
set incsearch " Incremental search, search as you type
set ignorecase " Ignore case when searching
set smartcase " Ignore case when searching lowercase

" Colors **********************************************************************
"set t_Co=256 " 256 colors
set background=dark
syntax on " syntax highlighting
colorscheme ir_black

" Status Line *****************************************************************
set showcmd
set ruler " Show ruler
"set ch=2 " Make command line two lines high
match LongLineWarning '\%120v.*' " Error format when a line is longer than 120


" Line Wrapping ***************************************************************
set nowrap
set linebreak " Wrap at word

"Line Number
"*****************************************
set nu " Line numbers on


" Misc settings ***************************************************************
set backspace=indent,eol,start
set number " Show line numbers
set matchpairs+=<:>
set vb t_vb= " Turn off bell, this could be more annoying, but I'm not sure how
set nofoldenable " Turn off folding
set noerrorbells


" File Stuff ******************************************************************
syntax enable
filetype on " Enable filetype detection
filetype indent on " Enable filetype-specific indenting
filetype plugin on " Enable filetype-specific plugins
"compiler ruby " Enable compiler support for ruby

" Ruby stuff ******************************************************************
compiler ruby " Enable compiler support for ruby
map :!ruby %


" Omni Completion *************************************************************
autocmd FileType html :set omnifunc=htmlcomplete#CompleteTags
autocmd FileType python set omnifunc=pythoncomplete#Complete
autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS
autocmd FileType css set omnifunc=csscomplete#CompleteCSS
autocmd FileType xml set omnifunc=xmlcomplete#CompleteTags
autocmd FileType php set omnifunc=phpcomplete#CompletePHP
autocmd FileType c set omnifunc=ccomplete#Complete
" May require ruby compiled in
autocmd FileType ruby,eruby set omnifunc=rubycomplete#Complete


" Hard to type *****************************************************************
imap uu _
imap hh =>
imap aa @

" Change which file opens after executing :Rails command
" ****************************************
let g:rails_default_file='config/database.yml'

" Insert New Line *************************************************************
" awesome, inserts new line without going into insert mode
map <S-Enter> O<Esc>
map <Enter> o<Esc>
"set fo-=r " do not insert a comment leader after an enter, (no work, fix!!)


" -----------------------------------------------------------------------------
" | Plug-ins |
" -----------------------------------------------------------------------------

" NERDTree ********************************************************************
:noremap <Leader>n :NERDTreeToggle<CR>
let NERDTreeHijackNetrw=1 " User instead of Netrw when doing an edit /foobar
let NERDTreeMouseMode=1 " Single click for everything


" NERD Commenter **************************************************************
let NERDCreateDefaultMappings=0 " I turn this off to make it simple

" Toggle commenting on 1 line or all selected lines. Wether to comment or not
" is decided based on the first line; if it's not commented then all lines
" will be commented
:map <Leader>c :call NERDComment(0, "toggle")<CR>

" CommandT ********************************************************
" To compile:
" cd ~/cl/etc/vim/ruby/command-t
" ruby extconf.rb
" make
let g:CommandTMatchWindowAtTop = 1
map <Leader>f :CommandT<CR>


" fuzzyfinder ********************************************************
" I'm using CommandT for main searching, but it doesn't do buffers, so I'm
" using FuzzyFinder for that
map <Leader>b :FufBuffer<CR>
"let g:fuzzy_ignore = '.o;.obj;.bak;.exe;.pyc;.pyo;.DS_Store;.db'
"
"




Enjoy

Wednesday, November 17, 2010

Vim is Life

If you are a Windows guy, don't bother to keep reading, because I am about to go on a rant about development on Linux / Mac. If you are interested, feel free; just a warning.

For those of you still not using Vim as your "IDE" for Ruby development, shame on you! Before the TextMate and Emacs fanboys swoop down and kill me, give me a moment to explain my position.

My requirements for a development platform:
1: Free isn't just good, it is required
In a world where I am dropping a grand or more on a laptop to keep up with the massive amounts of abuse I deal out, a development suite that is free is a must. TextMate users take note: I enjoy TextMate, but have no desire to pay.

2: Compatibility between systems
For the past few years, I have been on a team with mixed operating systems. Macs and Linux (Ubuntu, Fedora, CentOS, Minix) are all in the mix. Switching between pairs for programming requires that we have similar setups, and therefore compatible development tools.

3: Bloat is bad, and slow
Current-generation IDEs such as RubyMine and NetBeans (with Ruby packages) consume a good chunk of memory and CPU time. This makes them more difficult to run on lower-powered systems such as VMs or systems pulling double or triple duty. In my case, as with most developers, my development machine is my server, my job box, my queue server, my poller / background operations machine, and of course it runs all of the standard office apps one has to run. Allowing my IDE to suck up more than a gigabyte of memory just to load is insanity. Vim, with all the plugins I have attached to it, runs in under 256MB. That is right, a quarter of the footprint.

4: Portability is God
The above point brings me to this one. Small is good, small is great, small is powerful. With a development stack that is 256MB, I can now live inside of Minix. Why? Because now it fits. Load Minix and Vim onto a USB stick and you have a walkabout development environment that fits in your pocket. Going to the girlfriend's for the night and need to get some work done? USB and go. No worries about her three-year-old Windows box. Reboot off of the USB and rock some code.


5: Simplicity is power, power is simple
If I am running RubyMine and my pair is running Emacs, it is almost impossible for us to work on each other's systems. Emacs is great, but the level of customization and macros that Emacs people seem to run prevents me from taking over the keyboard without spending half the time asking which key does what. This is not something that is worthwhile in my opinion. A simple setup is nice. Most of us that have been developing for a while have been exposed to Vim for many things. Standard keys for standard activities. All systems that run Vim have the same setup: i for insert, escape for command mode, etc. Simple setups allow for simple interaction between pairs.

6: Extensibility
RubyMine comes with everything you need. Or so it claims. In my experience, attempting to get RubyMine to do everything I want it to do becomes an exercise in masochism. You want to run SQL from inside of RubyMine? Have fun; you can do it, but its setup is a pain. Just add a MySQL, SQLite3, and an Oracle database to the list of connections and wait for RubyMine to choke itself out. Not fun. Vim has thousands of plugins ready to go that are under active development. I have found everything from typeahead completion to YAML checkers. Everything you could want, you can find. I even found a plugin to do Prolog completion.

7: Customization should be optional
I know a few Emacs users that swear by their tool, as they should, and more power to them. However, I have issues with an N=1 setup. Emacs seems to attract the type of user that loves to customize the setup to the point of obscurity. The user might be blindingly fast at developing in one language, but the setup becomes unusable for anything but its intended target. I should not have to customize my development tool in order to use it. Out-of-the-box operation of the tool is valuable to me. I should be able to hop on a brand new install of *nix and use the editor without wondering what happened to my massive list of macros. A simple one-line .vimrc file is all you need, and truthfully, it is not even required.

Yeah, I know. I am probably wrong about most of this. But it is my blog, and I needed something to rant about today that wasn't work-related.

Monday, November 15, 2010

Using Spawn with a thread / fork limit

Often when you are writing software to provide parallel operations, either to improve performance or to create a non-blocking section of long-running code, you tend not to think about the impact of forking many threads. The reason for this, in my experience, is that you usually only create a handful of them. This can, however, be dangerous. In the example below, we are assuming that there are a reasonable number of Funds in the system for the current context, say ten.


fund_processes = []
Fund.all.each do |fund|
  fund_processes << spawn do
    process_start(fund)
  end
end

wait(fund_processes)


This will not cause any real issues as long as you wish to block while waiting for these to finish processing, and as long as there are a reasonable number of Funds in the system. The issue comes when you do something stupid without thinking about it, such as requesting a set of forks for all users in a 150,000-user system. That is right, you just created 150,000 processes attempting to use the same memory, CPU time, database, and IO as the rest of the system. A "think-o" of this magnitude can bring down your system. Trust me, I know. It is not a pretty sight when you lock yourself out of your local job box because the system (8 cores, 12GB RAM) does not have enough cycles free to respond to an SSH request. Time to wait and hope, not something you ever want to do with a production system.

To address this concern, I was directed to write up some code that would allow you to pass Spawn a limit on the number of processes to create at any one time. This extension of Spawn allows you to use the same arguments as Spawn itself, with one additional parameter, group_size. Here is the new code that can be found at my git repo.

each_in_parallel_groups_of creates processing groups with a size limit on them. Truly straightforward. One thing to keep in mind when using this is that each group will wait for its longest-running process to finish before moving on to the next X items to process.

each_in_parallel_groups_of can be called on any Enumerable. This is a nice trick to call on say:

Fund.all.each_in_parallel_groups_of(5) do |fund|
  process_start(fund)
end


This makes it easier to make sure that you keep in mind the amount of resources that any one code block can absorb.

Here is the main code change in the Spawn fork. A request is in to get it pulled into mainline. I will keep you up to date on that.


# Spawns a limited number of threads / forks and waits for the entire group to finish.
# Accepts the same spawn options as spawn.
# Robert Meyer / Dean Radcliffe
def each_in_parallel_groups_of(group_size=10, spawn_opts={}, &block)
  spawn_opts[:argv]   ||= spawn_opts[:process_label] || "default_ruby_fork_process"
  spawn_opts[:method] ||= :fork

  raise LocalJumpError unless block_given?

  self.each_slice(group_size) do |group_job|
    fork_ids = [] # reset for each group
    group_job.compact.each_with_index do |item, index|
      fork_ids[index] = spawn(spawn_opts) do
        block.call(item)
      end
    end

    logger.info "Waiting for #{Process.pid}" if defined? logger
    wait(fork_ids)
    logger.info "Done for #{Process.pid}" if defined? logger
  end
end


Tuesday, November 9, 2010

Using Spawn to circumvent Ruby OCI driver connection issues

In our system we use ActiveMQ with a set of pollers to run background jobs such as importing large data sets and publishing information over the wire. For the most part this works great: you generate a message, publish it to the queue, and forget about it. Normally everything rocks, except for the case where the runners (pollers) have not had any work for the past X hours, where X >= 4. In this lovely case, Oracle and Ruby don't like each other anymore. Ruby asks Oracle for connection information and, somewhere in the stack, there is a fifteen-minute wait before both sides agree that the current connection to the database is no longer active.

  This sucks. Hard. We have tried many things in order to get Ruby to release the connection without checking with Oracle.
 
dbconfig = ActiveRecord::Base.remove_connection
ActiveRecord::Base.establish_connection(dbconfig)

and before that:

ActiveRecord::Base.establish_connection

and before that:

ActiveRecord::Base.verify_active_connections!

It was insane: everything we tried kept getting hung up in some magical part of the stack that didn't like the fact that we were allowing long-running threads with no activity.

I finally found a solution that worked out for us. Amusingly it was while I was working on a section of the code for my own gratification.

Spawn. That's right, Spawn. Spawn is a simple, clean plug-in that helps take the pain away from forking and threading in Ruby. Besides having a healthy number of fixes for threading issues in Ruby, it provides a straightforward way of creating child processes and waiting for them to complete. Of course there are a handful of options that you can specify, but the base-case syntax is simple:

spawn do 
   call_to_long_running_process_or_job
end


 That is it. It just works. If you want to wait for the process to complete before moving on:


fork_process = spawn do 
 call_to_long_running_process_or_job
end

wait(fork_process)

Makes life easy. It also provides a workaround for the Oracle time-out issue.

Old code:

def on_message(message)
  logger.info("#{Time.now.to_s} Received request: #{message}")
  ActiveRecord::Base.establish_connection
  # Sometimes our connection goes away when a poller has been waiting a long
  # time for a job; the establish_connection above is the 15-minute hang line of code.
  logger.info("#{Time.now.to_s} Finished reconnecting to the database.")
  do_something(message)
end


New code:
def on_message(message)
  fork_process = spawn do
    do_something(message)
  end
  logger.info("Forked for processing. Parent PID (#{Process.pid}) is waiting for PID -- #{fork_process.handle}")
  wait(fork_process)
  logger.info("Completed message for Parent PID (#{Process.pid})")
end

No wait time for Oracle to release the connection or provide a new connection. The process gets the message out of the queue, forks itself, and runs it immediately. Makes the users happy, and provided me with enough ammo for a secondary post about threading with limits and lambdas.

Hope this helps someone out, and if not, there are a few blog links on the Spawn ReadMe that also helped me out.





Scott Persinger's blog post on how to use fork in rails for background processing. http://geekblog.vodpod.com/?p=26
Jonathon Rochkind's blog post on threading in rails.
http://bibwild.wordpress.com/2007/08/28/threading-in-rails/

Has and belongs to many through

While working on a legacy data conversion project, I decided to create a Rails project in order to cheat, so I didn't have to write a ton of SQL statements to dump things to flat files in order to operate on them. However, while doing this, I realized that the table names, and the entire model structure, of the legacy application were jacked up hard.

This caused me to try has_and_belongs_to_many with the :through argument along with :association_foreign_key and :foreign_key. Let me tell you how stupid I felt when I realized I was looking at the docs for the wrong version.

Brain needs to be on before turning to Google for the answer.  The code that does what I needed is below:

has_and_belongs_to_many :users, :join_table => "ib_users_accounts_link",
                                  :foreign_key => "accounts_id",
                                  :association_foreign_key => "user_id"
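
With that in place the association behaves like any other habtm; a hypothetical usage sketch (the Account class name is assumed from the accounts_id column):

  # Hypothetical usage against the legacy join table described above.
  account = Account.find(42)
  account.users                 # joins through ib_users_accounts_link
  account.users << User.first   # writes a row into the join table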

Friday, October 15, 2010

Meetings And Estimation

I found out today that high-level estimation meetings for complicated, ill-defined requirements, with a large group of people who each have different levels of understanding of the systems involved, == long, frustrating meetings with little or no ROI.

Tuesday, October 5, 2010

Custom to_xml for ActiveRecord classes

In an attempt to produce client-definable API returns, Dean Radcliffe, Jake Scruggs and I decided that using the locale files would be a good idea. I mentioned this in a previous post. It is a great idea, and the execution is pretty sweet as well.

I am currently fighting with a way to handle the sub-level arrays, such as

Document has many DocumentTags

so a layout like

<doc>
  <tags>
    <tag>Tag 1</tag>
    <tag>Tag 2</tag>
  </tags>
</doc>

where Tags are objects themselves

I got a solution working, but it is coarse and does not use Builder XML.

It uses heredoc tags (<<-EOF below) and internally hand-hacked XML arguments.

Nasty...

  def to_xmls
    xml_class_fetch = self.class.to_s.downcase
    buff = <<-EOF
    <#{xml_class_fetch}>
    #{ClientConfig.get("xml_export.#{xml_class_fetch}").map do | xml_field_name, method_call |
    case method_call
    when Symbol
      val = self.send(method_call)
    when String
      val = self.instance_eval(method_call)
    end

    if val.is_a?(Array)
      val = val.map do |element|
        if element.respond_to?(:name)
          element_call = element.name
        elsif element.respond_to?(:identifier)
          element_call = element.identifier
        end
        "<#{xml_field_name.singularize}>#{element_call}</#{xml_field_name.singularize}>"
      end
    end

    "<#{xml_field_name}>#{val}</#{xml_field_name}>"
    end.join("\n")}
    </#{xml_class_fetch}>
    EOF
  end


where the feed looks like

  xml_export:
    document:
      # TODO - may add XML element names etc ...
      - ["other-id", :other_id]
      - ['created-at', :created_at]
      - ['tags', "tags.to_a"]



As one can imagine, I am not happy with this solution at all.
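
One direction I am considering is Builder; here is a purely hypothetical sketch (it assumes the same ClientConfig mapping used by to_xmls above) that would at least get rid of the hand-built strings:

  require 'builder'

  # Hypothetical Builder-based replacement for to_xmls; same ClientConfig lookup.
  def to_xml_with_builder
    root = self.class.to_s.downcase
    xml  = Builder::XmlMarkup.new(:indent => 2)

    xml.tag!(root) do
      ClientConfig.get("xml_export.#{root}").each do |xml_field_name, method_call|
        val = method_call.is_a?(Symbol) ? send(method_call) : instance_eval(method_call)

        if val.is_a?(Array)
          # Nested collection, e.g. <tags><tag>...</tag></tags>
          xml.tag!(xml_field_name) do
            val.each do |element|
              text = element.respond_to?(:name) ? element.name : element.to_s
              xml.tag!(xml_field_name.singularize, text)
            end
          end
        else
          xml.tag!(xml_field_name, val)
        end
      end
    end
  end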

If anyone has better suggestions for returning valid XML with nested elements that come from the model's has_and_belongs_to_many and other associations, please feel free to shoot me a tweet or comment.

Friday, October 1, 2010

Rails Join Tables

Wow,

Bad morning. Took me 30 minutes to find out that the reason I was throwing an error on a controller was not bad code, but a bad migration that I wrote.

When writing the join table migration I neglected to specify that there was no primary key. Oracle HATES this.

  def self.up
    drop_table :email_blasts_users
    create_table :email_blasts_users, :id => false do |j|
      j.references :email_blast
      j.references :user
    end
  end

Make sure to have that :id => false if you don't want your database to yell at you.
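
For reference, the models then just point at that join table with plain habtm declarations (class names assumed from the table name):

  # Hypothetical model declarations on top of the email_blasts_users join table.
  class EmailBlast < ActiveRecord::Base
    has_and_belongs_to_many :users
  end

  class User < ActiveRecord::Base
    has_and_belongs_to_many :email_blasts
  end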

Wednesday, September 29, 2010

Requirement / RPM Hell

For those of us that have been around long enough on Red Hat distros, we remember RPM Hell: a cyclic series of RPMs that depended on each other, thus RPM Hell. Trying to find the one link that would let you install the rest of the packages was often an exercise in frustration that made you think your career path should have been in the area of Great White shark research and not software development. Our upcoming retrospective, and this XKCD, made me think of RPM Hell.

There is a bit of back story required in order for the rest of this to make any kind of sense...

Our infrastructure and staff have gotten much better over the course of the last year. When I first started on this team, it was as a solo developer attempting to transform a questionable architecture into something resembling a sanely designed system. I had some backup and support from the team's lead, Tim Galeckas <@timgaleckas>, and some direction in the most general sense from the company's CTO, but not much more than that. It was one of those projects where the general gist is "Fix it," and "Fix it" is about all the direction you can expect. As one can imagine, the project was a huge time sink with minimal return.

A few months after that project was terminated, the new effort to rebuild the system in RoR got into full swing. We still had issues, of course; nothing phoenixing from a process that broken can be without defects. As these issues became apparent, changes came to my little world.

The team is now so divergent from the original that it is hard to see how one originated from the other. We have a project manager, a user interface designer, four developers, two quality assurance members, and a true-to-life development manager. There is even a process in place to score, scope, and select stories <tickets, cards, issues> for development in any iteration. All that being said, there are still problems.

Our primary issue, in my opinion, is not the lack of staff, the backlog of stories, the intra-team interaction, or quality assurance backlogs. It is the quality of the requirements that the development team receives. I have been converted from the waterfall-style development environment I first worked in to an agile approach, so complaining about story requirements might seem a bit odd. I do not enjoy a feature request that leaves nothing interesting for me to do: when presented with a story that tells me how to technically solve the issue, how to visually present the results, and what should be tested, I sigh. These stories are what turn us into code monkeys and not professionals. This is a fine line to tread; I want enough information so that when I deliver a piece of functionality it is complete, does what it is supposed to, looks good, and is performant. What I see as the RPM Hell of my development space is stories that look complete but have hidden requirements or functionality that is only available if you tap the brain of the primary stakeholder.

My current example is a story about displaying information from a third-party system and how it weaves into the current system. The conversation and the story's development went something like this:

End consumers of InvestorBridge want to be able to view fund-level information.

Great, no problem. What does that mean? 

Well the system that we pull information from has a huge data set of fund level information. 

 Okay, we can already interface with that system so what information do you want? 


It is on the story.

That is something I love to hear. To me that means I can load up the story, read it, and understand what they want. This is almost what happened. Almost. The story told of things like fund-level returns and AUMs (Assets Under Management), and the display of these was to follow the current displays of account-level information of the same type. Outstanding: easy to do, easy to validate, easy to import.

Where everything fell apart was that there happened to be a document attached to the story that contained additional requirements. That is right, the story had an attachment that modified the context of development. In our process attachments are most often images of expected display or test data. I have never before seen attachments as additional requirements. Requirements should be in plain text on the story. This is the standard and what the developers expect. Where this devolved from a process to a cyclic definition followed by more questions is when the list of additional fields broke the current display model. This brought in our UI guy as well as the project manager. Scope creep was inevitable at this point. New views, click paths, and imports were all required after this document was rediscovered.

This brought to light two things:
  1. We, as a team, failed to understand the story and the feature requested.
  2. We, as both a unit and as individuals, failed to ask the questions that would have made this apparent.

This felt like RPMs all over again. If I had known which RPM was the keystone RPM, I would have been able to easily install the software and understand its dependencies. In much the same way, if I had known what questions to ask, I would have been able to see the full scope of the story and its underlying implications.

So, how do you know what questions to ask when you don't know what you don't know? Where do you start? Where is my yum for feature requests?

ActiveMessaging and Rails 3

For those of us upgrading our applications to Rails 3, there is something nasty to watch out for if you are also using ActiveMessaging. The ActiveMessaging Google group covers some of the issues that have been found.

 Make sure to take a look at Spraints' fork.

http://github.com/spraints/activemessaging

Tuesday, September 21, 2010

SOAP, REST and XML Violence

A few of our clients have requested an API to send data and verify uploaded data. This, in general, is pretty standard. As you grow you will run into clients that are more technically savvy than the others or have larger data sets with more frequent updates. If you are uploading three hundred documents a week to a given website by hand, well, you begin to look for a better way to do things. Sometimes, and only sometimes, APIs are the way to go.


Out in the world, there is a huge argument over SOAP vs. REST and which is the superior API style. I am by no means the authority on the matter, but it seems to me that there are different use cases for both of them. Now that my opinion is out in the open, I am going to clarify that position:


I HATE SOAP. Every company where I have been forced to use SOAP drove me insane. Insane, I tell you! The amount of overhead required to work with SOAP makes me mad. The other part of this is that during the development cycle, the WSDL kept changing without a revision number. The providers I was working with deemed that, since the service was still in a development stage, it was not a requirement to version the WSDL. Every time the system was almost complete and I had conformed to all of the SOAP contracts, the WSDL changed out from under me. This might be the cause of some bias on my part...


In the same breath I am going to defend SOAP for having a contract, something that REST lacks in the formal sense. A decent REST resource can be found here.


REST is great for me as a developer in a shop that needs to maintain a decent velocity with a small headcount. REST snaps right over the top of my already-established controllers, and with a few modifications to the ActiveRecord models I can customize the output of the .xml request. Dean Radcliffe, one of the other developers, has convinced me that storing client configuration for XML output in the I18n files is not a bad idea.

I found a blog whose opinions on APIs I tend to agree with.
Excerpt from REST and SOAP: When Should I Use

...Areas that REST works really well for are:
  • Limited bandwidth and resources; remember the return structure is really in any format (developer defined). Plus, any browser can be used because the REST approach uses the standard GET, PUT, POST, and DELETE verbs. Again, remember that REST can also use the XMLHttpRequest object that most modern browsers support today, which adds an extra bonus of AJAX.
  • Totally stateless operations; if an operation needs to be continued, then REST is not the best approach and SOAP may fit it better. However, if you need stateless CRUD (Create, Read, Update, and Delete) operations, then REST is it.
  • Caching situations; if the information can be cached because of the totally stateless operation of the REST approach, this is perfect.

....If you have the following then SOAP is a great solution:
  • Asynchronous processing and invocation; if your application needs a guaranteed level of reliability and security then SOAP 1.2 offers additional standards to ensure this type of operation. Things like WSRM – WS-Reliable Messaging.
  • Formal contracts; if both sides (provider and consumer) have to agree on the exchange format then SOAP 1.2 gives the rigid specifications for this type of interaction.
  • Stateful operations; if the application needs contextual information and conversational state management then SOAP 1.2 has the additional specification in the WS* structure to support those things (Security, Transactions, Coordination, etc). Comparatively, the REST approach would make the developers build this custom plumbing.
Mike Rozlog also makes a good point about XML: it can be heavy, very heavy, if you are transmitting a ton of verbose data over the wire. Mo on Stackoverflow makes a pointed joke at the cost of verbose XML:

"XML is like violence - if it doesn't solve your problem, you're not using enough of it."

The current application that I am working on has both APIs, I am sad to report. One supports the legacy system that feeds it documents, and REST is now being provided to the clients as the API of choice for programmatic interactions. I am happy to say that this is not as horrid as it sounds. The legacy SOAP code handles all kinds of stupid requests and statuses that are not needed by anyone or anything but an ill-conceived piece of stateless Java. As we transition our clients off of that legacy system, we will be able to DRY up the API controllers, and in this case remove that API controller entirely, as it will have been replaced by the RESTful API.



I will stop ranting now and state: SOAP and REST both have their place in this world, but given the chance I would rather work with REST as a developer. Flickr and Twitter have great examples of REST working well.

--Rob

New Relic

I am attempting to add New Relic instrumentation to our ActiveMQ-powered pollers / processors. Any help or insight that anyone has would be valuable.

New Relic and Active MQ

Monday, September 20, 2010

Ruby on Rails And Soap Hell

To Whomever decided that WSDLs and SOAP should be the Enterprise communication standard,

I should murder you, slowly.....

--Robert R. Meyer


Really? Really!? WSDLs? Come on, man. You're killing me here.

One of the WSDLs that I have to support, which we actually inherited from another product, is full of duplications and includes three calls to the same underlying function, with optional parameters as the only distinguishing feature separating the calls.

The WSDL is 640 lines for a few methods. This strikes me as insane. The best part is that there is nothing DRY about WSDLs. They are by nature the most disgusting blend of XML and... well, something more disgusting... like badly written pseudo-OO PHP?

The primary issue I have with SOAP is that so many of our consumers use SOAP as their API of choice. I hate this; the definition language, while verbose, is inelegant. There is far too much boilerplate code required to even begin to use the API. Entire commercial solutions exist in order to alleviate this issue. In my experience and opinion, that usually denotes a problem.

Solutions like SoapUI and a few others are designed to generate the required boilerplate code to start using SOAP. Anyone looking at the generated code should realize that two thousand lines of Java should not be required to request a list from your data provider. REST is much better in this regard.

One of our primary products' WSDL definitions requires 271 lines of Java just to use the document upload call. A bit excessive? I would say so.

After spending around four hours working on creating a new, simpler WSDL to expose document upload and meta-tagging, it is time to go home. All I can see is indented XML fragments with custom namespaces defined wherever the original creator determined it was best.

Headache? Check.
Code Blind? Check.
Mentally Drained? Check.

Time to go get a beer.

Selenium and Cucumber testing

I got talked into a Tech Talk for our internal conference. I decided that if I have to do it, I am going to do it on something that is useful to my team. Thus, Cucumber and Selenium. I plan on covering the following points:
  • Why Cucumber
  • Selenium and Ruby / Java
  • Front-end verification
  • Good / bad practices with Cucumber
  • Transactional issues
  • ID problems
More to come on this later

Rails Conf

Found out this morning that my buddy Jake Scruggs got one of his talks accepted to Ruby Conf. Check out his project: metric_fu

Wednesday, September 15, 2010

VPD and Oracle Scheduled Jobs

A few months ago we had a minor problem: the company's development Oracle server died. It did something fun and dropped a partition or two and generally went belly up. All things considered, not a major issue. The death of Oracle caused us some lost time, mostly due to the fact that most of us were not running Oracle Express, which would have allowed us to keep developing even with a down Oracle cluster. The quick lesson here: run Oracle Express if your box will handle the load.

One fallout of this problem, besides the lost time, was that we also lost our primary "Gold Schema." This was a big problem... Our system was a bastard child conceived by one of our Senior Developers as part of a bet. The conversation went something like this...

S: I really think the attempt to upgrade the PHP application to a newer version is a waste of time.

A: I think it is better than the alternative of starting over on a new platform.

S: I bet I can get a working version of the system on Rails in a week!

A: So do it.

As any of you that have had this kind of conversation know, this was a bad idea on both sides. It led to the creation of the new system, which was supposed to be a proof of concept, in under a week. Granted, it was a 60-80 hour week, but all the same, code that is rushed like that takes on a ton of technical debt and tends to inherit the legacy system's debt as well. The debt I am currently talking about is that the Rails application's database was a copy of a MySQL database ported over into Oracle on short notice. Using this tactic, we do not have migrations from blank to current state. Not usually an issue with Oracle, considering our in-house systems allow us to request a clone of a current schema, but in this case, with the loss of the Oracle schema, we lost our baseline.

Recovering the baseline from one of the developers that was using Oracle Express was straightforward, but we forgot one thing when we made that developer's schema the master: we forgot about VPD. For those of you that don't know what VPD is, be thankful. It is Oracle's home-grown security system, affectionately called Virtual Private Database. VPD has good uses, ones that can be transparent to the developer: things like table-level security and filtering, as well as constructing database sessions with additional audit information. A few examples can be found here. One thing it is not good at is row-level security, which is what we were using it for.

Jobs are another part of the Oracle schema that we failed to remember at first. Our Database Developer informed me that there are at least three ways you can schedule jobs. We managed not to clear out all of these when we removed VPD from our security model. One of the jobs, scheduled every fifteen minutes, constructed a materialized view of which user was allowed to see which document, based on a series of permission levels and tags. Generally you would imagine that this job would not cause any issues; it was just reading from around four tables and constructing a new view. All good. But in this case things were not as nice as they should have been. You see, there was a bug.

The bug caused this job to lock all of the rows that it was using to construct the view, in an attempt to verify that the view it would construct was accurate. Oracle locking somewhere in the range of half a million rows across four tables causes things like deadlocks. At least the deadlock provided a trace showing what had locks on the rows. As soon as we found this, we went through the new "gold" schema and blew away all remaining vestigial VPD operations.

Lessons to be learned:
1. Make sure your migrations allow you to build a new database from scratch
2. Verify that migrations remove tasks / jobs / views that are no longer needed and could impact the performance of your system
3. Do not use a bet as a good reason to create a new production system
4. Attempt to learn from the last generation of software's sins
5. Be friends with your DBA and DBD, they can save your ass.

--Just because you have a hammer does not make it the right tool for the job

Monday, September 13, 2010

Windy City Rails

A few of us here at Backstop attended Windy City Rails this weekend. It was a good time with a bunch of good speakers, including Jake Scruggs covering a ton of topics and providing a massive amount of info.

During the course of the day WCR had a little project written in Rails 3. We, as a collective, coded the Dojo Chat Server. Interesting little toy, considering it was written in under 8 hours by a group of people with varying levels of experience and dedication to the project.

Just thought it was an interesting experience.

Rake Tasks Calling Rake Tasks

I am currently working on an export / import to move data from a legacy system backed by MySQL to the new generation system backed by Oracle. Due to the fact that both our DBA (Database Administrator) and DBD (Database Developer) are overloaded, I have been tasked with creating something that ports the data over. To do this I created a Ruby project attached to the MySQL server and pumped out the data in a pipe (|) delimited format with headers.

This rocks for my uses because the main project uses the FasterCSV gem. So using a set of rake tasks I can export the data from the old system, then execute the import tasks to spew the data over to the Oracle-backed system.
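
As a rough illustration (file name and column names here are hypothetical), reading one of those pipe-delimited exports with FasterCSV looks something like this:

  require 'fastercsv'

  # Hypothetical import of one exported file; the real version lives in a rake task.
  FasterCSV.foreach("export/users.psv", :col_sep => "|", :headers => true) do |row|
    User.create!(:name => row["name"], :email => row["email"])
  end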

I was looking around for an easy way to call many rake tasks in much the same way as Capistrano does. It turns out that it is drop-dead easy.

Check out Calling rake tasks from another rake for details.
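
The core of it (task names below are hypothetical) is simply invoking one task from inside another:

  namespace :legacy do
    desc "Export the legacy data, then import it into the new system"
    task :migrate => :environment do
      Rake::Task["legacy:export"].invoke   # dump the pipe-delimited files
      Rake::Task["legacy:import"].invoke   # load them with FasterCSV
    end
  end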