Thursday, July 14, 2011

old notes: FCP development opportunities?

This was written a long time ago, serving as notes to myself on development opportunities outside of my usual sphere. Now that FCP X is out, these points are moot.

In summer 2010 I attended a DMA Final Cut Studio integration class. I expected workflow and Final Cut Server training, but we didn't really get to much of that. It was excellent anyway: Mark Spencer, a Motion expert, instructed, and we hammered out a lot of Motion projects. Totally inspiring.

things which stood out:

 FCP/STPro: when you send from Final Cut Pro to Soundtrack Pro, an entire sequence is sent; In and Out points in a given sequence are ignored. OPPORTUNITY: would it be possible to build an FCP plugin which builds a new sub-sequence from the In and Out points and sends that? Or builds a new stand-alone sequence and sends that?

STPro: in class we did an exercise where we removed a chirping bird. This was done by 1) switching to the spectral display, then using the Spectrum view HUD to dial in on a frequency range; 2) using a bounding box to select the bird chirps and deleting them. OPPORTUNITY: a pencil tool which reduces the energy level / dB as one paints over the range in the Spectrum view. Sort of like the older sound/frequency painting tools, or that crazy audio-painting tool MetaSynth.

Preference files: STPro effect references are stored in .pst files. The Finder lists these as Logic Pro files. It would be nice if there were a guided text editor for these.

FCP: when applying audio-level fades to multiple clips in a track (select-forward tool, once it's the "favorite"), the scale filter is applied without taking into account the length of the clips. My notes show there may be an opportunity in analyzing the ramp-in and ramp-out of a crossfade and keeping the slope, although right now I don't know what that entails. If it means keeping the slope, then the time can't change. Or does it mean looking at the dB and scaling to match?

OS: Kenneth would like a file browser/view which can browse audio and video by the durations of their content, used for picking content.

FCP / Compressor: since scripts can be run at completion, one could make a mess of simple Growl scripts to provide notification within a server pool. Heck, most anything: tweets, MacSpeech.

COLOR: there is a "Still Store" in Color. Wouldn't it be cool to make an HTML document containing JPEGs of the Still Store for 'approval' in a workflow? Or at least for cataloging; peeps would be crazy to approve tweaked video from JPEGs.

DVDSPro: transitions? Apparently there are not a lot of transitions for menu selections; at least there could be some crazy ones. This would be Quartz?

Wednesday, December 1, 2010

a branch of SlingRuby supporting remote integration testing

So I branched my stuff and started in by removing localhost hardcoding.

here is the multi-session worklog:

  •  add 2 new attributes, host and port.
  •  on init parse the 'server' argument extracting host and port
  •  add accessors for host and port.
  •  these changes will make it easier to port the older scripts which hardcode a localhost value.
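The parsing step above can be sketched roughly like this; `parse_server` is an illustrative name, not the actual SlingRuby code, which may differ:

```ruby
require 'uri'

# Sketch: extract host and port from a 'server' argument such as
# "http://localhost:8080/", so tests no longer hardcode localhost.
def parse_server(server)
  uri = URI.parse(server)
  [uri.host, uri.port]
end

host, port = parse_server("http://localhost:8080/")
# host => "localhost", port => 8080
```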
kern-1105.rb - tried not to change the style of this test.
  •  replace hardcoded constants with framework server instance's accessors
  •  get admin-user auth from Slingusers::User.admin_user. Hmm, may want to do this across the board in a later sweep.
kern-568.rb - this one is of interest due to the chance that the timezone of the target host may be different from that of the testing box. In order to test the timestamps across timezones I modified the tests to parse the timestamps into a common, comparable format. Tested by poking against the Indiana QA server, a far different place than Belmont, California.
  • Time.parse() introduced
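The idea in miniature (values are illustrative, not from the test itself): Time.parse turns wall-clock strings from different zones into Time objects, which compare by instant rather than by text:

```ruby
require 'time'

# The same instant written in two different zones:
belmont = Time.parse("2010-12-01 12:00:00 -0800")  # Pacific
indiana = Time.parse("2010-12-01 15:00:00 -0500")  # Eastern

# The raw strings differ, but the parsed Times compare equal.
belmont == indiana  # => true
```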
kern-577.rb - This test's localhost issue is in the teardown. no biggie.
Wasn't able to run consistently against the Indiana QA server; runs fine against localhost. 3akai is having proxy errors just now.
What I am finding is that kern-577 cannot consistently create a user at Indiana, and when it does, it cannot consistently create a link to the user's test-uploaded document (gets a 500!). It does sometimes pass, but it sure is flaky. (Returning a day or two later) OK now. To be expected in a fluid environment.
  • use Sling.url_for()
kern-797.rb - another teardown tweak
  • use Sling.url_for()
kern-854.rb - just a couple of lines, but this fails on teardown. I'll revert to be sure that it works without changes. Yep: when getWidgetServiceConfiguration is called against remote hosts (or Indiana) it must not return whatever is expected. Lance's page shows that 854 fails when run locally too. OK, I'll move on for now :)
  • Sling.url_for() in WidgetServices methods
kern-857.rb - this is similar to 854: works OOTB against localhost, but fails against a remote host due to the hardcoded localhost URL. The test has a utility method 'getWidgetServiceConfiguration' which is, well, getting the service configuration. This too currently fails on the QA server, so I'll just plug along with my localhost code changes...
(later) Fixed this and 854 by changing the xWidgetServicesConfiguration foo methods to use the Sling class's url_for() method. The code was putting in double slashes, which were failing.
  • Sling.url_for() in WidgetServices method
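The double-slash bug in miniature, with illustrative values (this is not the actual url_for implementation): naive concatenation of a base URL ending in "/" and a path starting with "/" yields "//":

```ruby
base = "http://qa.example.edu:8080/"
path = "/system/console"

naive  = base + path                               # double slash
joined = base.chomp("/") + "/" + path.sub(%r{\A/+}, "")

naive   # => "http://qa.example.edu:8080//system/console"
joined  # => "http://qa.example.edu:8080/system/console"
```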
kern-891.rb - is much like kern-577, a teardown use of localhost. Swapped in the url_for method :)
  • Sling.url_for()

That's it for the kerns. Now for the tests. The first on my list

  is a false positive. 

 had a few hardcoded localhost URLs which I modified to use the Sling service's setup

 This test had failed because I don't have mailtrap installed in my Ruby repository of gem wonder. This test runs mailtrap on localhost, so changing it to something else doesn't make sense. Now to go see what mailtrap is. Yeppers: makes no sense for a remote-host test. Sweet.

Next Steps

Roll this stuff into my branch, then update my checkout against the current master to see what I've missed.

modify tools/runalltests.rb to contain a 'remote' blacklist. Not sure how it could tell now :) I'll take a big swing, add a "remote" flag, and have runalltests.rb do the sed stuff I posted about earlier.
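A sketch of the blacklist idea, with made-up file names (the mailtrap test above is the motivating case, but I'm not naming its actual file): given a remote flag, skip tests that only make sense against localhost.

```ruby
# Hypothetical: tests that must run against localhost only.
REMOTE_BLACKLIST = %w[mailtrap-test.rb]

def runnable_tests(files, remote)
  return files unless remote
  files.reject { |f| REMOTE_BLACKLIST.include?(File.basename(f)) }
end

runnable_tests(["kerns/kern-568.rb", "tests/mailtrap-test.rb"], true)
# => ["kerns/kern-568.rb"]
```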

use one definition for the admin user. This is just cleanup, but it'll make it easier for folks who change their default admin user down the road.

modify/remove timestamp-based userID generation to use the new User service method using random numbers (check speed).
extend the users module to centralize creation of temporary users. Doing this will reduce ID collisions in multi-load/thread testing.
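Roughly what random-number IDs buy us (an illustrative sketch, not the actual User service method): timestamp-based IDs collide when parallel runs start in the same second, while a random suffix makes collisions vanishingly unlikely.

```ruby
require 'securerandom'

# Hypothetical helper for centralized temporary-user creation.
def temp_user_id(prefix = "testuser")
  "#{prefix}-#{SecureRandom.hex(8)}"
end

temp_user_id  # e.g. "testuser-9f86d081884c7d65"
```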

Tuesday, November 23, 2010

running Sakai3 Ruby tests against remote hosts

The Sakai 3 checkout comes with a nice set of Ruby test coverage. Reviewing tests is a good way to learn the landscape of a given system, which is why I'm poking around with this Ruby and Selenium stuff.

All the Ruby tests can be run via a utility script: tools/runalltests.rb

The tests run on localhost.

If you want to run the existing tests against a remote host you can manually modify the Ruby testing setup script to point at your target host. You'll get some errors, but later for that ;)

Or you can use this script, which modifies the testing framework, temporarily pointing it at an arbitrary host, and then runs all the tests. When it's done it returns the framework to its original state.

#!/bin/bash
# edit test.rb to point to a remote server
# and then via tools/runalltests.rb run all tests
# found in:
#   SlingRuby/kerns
#   SlingRuby/tests

# make sure we have an argument
if [ $# -ne 1 ]; then
  echo "Usage: $0 TESTHOST_URL"
  exit 1
fi

# some variables
# NAK_HOME needs to be set to your checkout/workarea
TARGET_SERVER=$1
TEST_HOME=$NAK_HOME/testscripts/SlingRuby

# the current runalltests script in NAK_HOME/tools expects to be run
#     from the SlingRuby directory
pushd $TEST_HOME

cp ./lib/sling/test.rb ./lib/sling/test.rb.bak

# in-place editing fun
sed -i '' "s!Sling\.new()!Sling\.new('${TARGET_SERVER}')!" ./lib/sling/test.rb

# stash the result away for later inspection, or not.
cp ./lib/sling/test.rb /tmp/test.rb.EDIT

# run all the tests in TEST_HOME/kerns and TEST_HOME/tests
#  except now the test setup will point at your TARGET_SERVER
ruby $NAK_HOME/tools/runalltests.rb

# repair changes to the testing framework
mv -f ./lib/sling/test.rb.bak ./lib/sling/test.rb

# go back to wherever you were
popd

Remote targeting can be useful for quick A/B comparisons.

I suppose this retargeting idea could be built into the current runalltests.rb script.
What I didn't see was a way to easily mongle server retargeting into the current test framework.

You will experience errors.
Tests were not written with this kind of use in mind, so you will get funky race conditions and conflicting states. Also some of the tests are hardcoded to use localhost, so they'll have trouble.

If the various ID generators were slightly tweaked to generate more unique IDs, and the use of localhost removed, these tests would run against arbitrary hosts.

You could use this technique to crowdsource load against a common target server. You and your evil pals would fire up a flock of shells on your local machines and point the test suite at the target server.

Localhost tests:


OK time for me to cook a stack of gluten free pancakes for my crew!

Saturday, November 20, 2010

it is not alive.

After all that exploratory dorking around I've created a tiny patch for sling.rb, in Sakai 3's testscripts/SlingRuby/lib/sling area, which adds an "is_Ready()" method.

The method returns false if
  1. there is no connection to the desired server, or
  2. any of the sakai-nakamura bundles are not up and Active
which should be sufficient for helping automate the Ruby tests in a CI-server context.
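For flavor, here is a sketch of the bundle check behind such a method (this is not the actual patch; the symbolic name and helper name are illustrative). It takes the JSON from the Web Console's /system/console/bundles/.json and reports whether every nakamura bundle is Active:

```ruby
require 'json'

# Given the bundles-report JSON, are all sakai-nakamura bundles Active?
def bundles_ready?(json_text)
  bundles = JSON.parse(json_text)["data"]
  sakai = bundles.select { |b| b["symbolicName"].to_s.include?("nakamura") }
  !sakai.empty? && sakai.all? { |b| b["state"] == "Active" }
end

up   = '{"data":[{"symbolicName":"org.sakaiproject.nakamura.core","state":"Active"}]}'
down = '{"data":[{"symbolicName":"org.sakaiproject.nakamura.core","state":"Resolved"}]}'
bundles_ready?(up)    # => true
bundles_ready?(down)  # => false
```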

Once you have that mod made you can cook up a little tester like this:
#!/usr/bin/env ruby
# test to see if sakai 3 is up and all
#  sakai bundles are loaded

# assume we're running from 3AKHOME/testscripts/SlingRuby

$LOAD_PATH << './lib'

require 'sling/sling'
include SlingInterface

@s = Sling.new()
# at this point we are logged in, yes, as admin
# ( it's arguable that the constructor should bail/fail if S3 isn't up. )

true == @s.is_Ready() ? exit(0) : exit(1)

and use it in your testing shell scripts so they won't run if the server isn't up or isn't accessible.

One thing that struck me during all this was that if the RubySling tests could be run by a group of people against a common remote host you would have a quick way to produce additional 'load' during a bug bash. Granted it would be just-less-than-senseless load. Suitably empowered participants would just fire up the tests in a mess of shells on stray machines and let them mumble around in the background while doing ad-hoc bashing.

Looks like just a little tweaking to the tests and their setup. hmm.

Wednesday, November 17, 2010

is it alive II

I need a hammer for pulling bundle state info as Sakai3 comes up:
. ~/.resty/resty
resty http://santoku.local:8080
for namei in santoku_three_{1..20}
do
  echo "getting $namei at one second intervals"
  GET /system/console/bundles/.json -u admin:admin | pp > /tmp/lifecycle$namei &
  sleep 1
done
Then I examined the files 'by hand' for differences. This black-box approach is good enough to learn how the state changes. (Of course comments are welcome!)

So on my test box sakai3 comes up and is fully alive in about 12-odd seconds.

The bundle states go from Installed to Resolved to Active.

So testing scripts can start up once all the bundles are Active.
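That "start up once all the bundles are Active" step could be a small polling loop; names here are mine, not from the test framework. The readiness check is injected, so anything (the bundles-JSON test, a TCP probe) can be dropped in:

```ruby
# Poll an injected readiness check until it passes or a timeout elapses.
def wait_until_ready(timeout: 30, interval: 1)
  deadline = Time.now + timeout
  loop do
    return true if yield
    return false if Time.now >= deadline
    sleep interval
  end
end

wait_until_ready(timeout: 1, interval: 0.1) { true }  # => true
```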

Starting in with the Ruby scripts found in testscripts/SlingRuby, I follow the README.txt's instructions for updating my build of Ruby. To get the top-level "is your environment ready" scripts to run I have to modify them a bit: they don't load the supplied ./lib utilities, so I added

$LOAD_PATH << './lib'

This may be due to some kind of Rubyista environment setting I'm missing. The "is your environment ready" scripts were testing localhost, which I'm not, so I modified the line that creates a new Sling bot:

@s = Sling.new("http://santoku.local:8080/")

and blammo:
bash-3.2$ ./create-user.rb testuser
User: testuser (pass: testuser)

As the main body of the Ruby test scripts is run quite often, I wonder what the heck is going on here.

An IRC trip to #sakai to ask whether this load-path stumble is due to my environment missing some settings Ruby practitioners commonly use. It looks like these little scripts were just left out of a large sweep which reworked issues like this for the main body of Ruby tests.

It would be better if the README.txt and the "is your environment ready" scripts just worked OOTB, especially as more newbs like me come on-line, but that's a little thing in the pace of the project. I'll file a JIRA for the library-path problems.

These little scripts are enough for me to see how to plug in the "is it alive" bit. Onward!

is it alive?

Watched Mel Brooks's Young Frankenstein with my kiddo over the weekend: "It's ALIVE!" is now our catchphrase for, well, you know, whenever a 12-year-old boy wants to yell something.

How do I know that my Sakai3 has come up and is ready for being poked with a sharp stick? In IRC I chatted for a moment with stuartf about this, and eventually wrote up KERN-1868 to put that sharp stick in the sand.

With the Sakai deployments at Stanford I cooked up a mess of lightweight web tests which could be resolved from the command line. These exercised various points of the service stack, culminating in a heartbeat test: IT'S ALIVE! (Stanford's Julian Morley has taken these tests and greatly improved them, weaving them through the load balancer's tests.)

Personally, being a blunt instrument, I'm going to get the JSON used in the Sling Web Console's console/bundles report, parse it for the status of Sakai-related bundles (somehow), and provide a FAIL of some sort if all the Sakai bundles are not up. This is admittedly rather blunt: it will undoubtedly come to pass that some Sakai-categorized bundles are in development, or are being A/B'ed, and so won't be Active, but this is an OK place to start. I'll presume that the categorization is sufficient for now.

Speaking of starting, I am a lazy programmer, so I'm not going to get all mighty curly as I explore this approach; I'll start off by using resty to make my requests.

Resty is a curl wrapper which exposes a set of commands into your shell, allowing you to REST at ease. Here's an example:

 $ . resty
 $ resty http://santoku.local:8080
 $ GET /system/console/bundles/.json --basic -u"admin:admin"
{"status":"Bundle information: 127 bundles in total - all 127 bundles active.","s":[127,126,1,0,0],"data":[{"id":0,"name":"System Bundle","fragment":false,"stateRaw":32,"state":"Active","version":"2.0.4","symbolicName":"org.apache.felix.framework","category":""},{"id":81,"name":"Apache Aries JMX API","fragment":false,"stateRaw":32,"state":"Active","version":"0.1.0.incubating","symbolicName":"org.apache.aries.jmx.api","category":""},{"id":85,"name":"Apache Aries JMX Core","fragment":false,"stateRaw":32,"state":"Active","version":"0.1.0.incubating","symbolicName":"org.apache.aries.jmx.core","category":""},{"id":68,"name":"Apache Commons IO Bundle","fragment":false,"stateRaw":32,"state":"Active","version":"1.4","symbolicName":"","category":""},{"id":18,"name":"Apache Derby 10.5","fragment":false,"stateRaw":32,"state":"Active","version":"10.5.3000000.802917","symbolicName":"derby","category":""},{"id":111,"name":"Apache Felix Bundle Repository","fragment":false,"stateRaw":32,"state":"Active","version":"1.6.4","symbolicName":"org.apache.felix.bundlerepository","category":""},{"id":45,"name":"Apache Felix Configuration Admin Service","fragment":false,"stateRaw":32,"state":"Active","version":"1.2.4","symbolicName":"org.apache.felix.configadmin","category":"osgi"},{"id":47,"name":"Apache Felix Declarative Services","fragment":false,"stateRaw":32,"state":"Active","version":"1.6.0","symbolicName":"org.apache.felix.scr","category":""},
It has a buddy, pp, which is a Perl pretty-printer one-liner.

$ curl > /usr/local/bin/pp
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0    88    0    88    0     0     83      0 --:--:--  0:00:01 --:--:--   423
$ more /usr/local/bin/pp
perl -0007 -MJSON -ne'print to_json(from_json($_, {allow_nonref=>1}),{pretty=>1})."\n"'
$ chmod +x /usr/local/bin/pp
which makes things a bit easier for me to read:
Octo:~ casey$ GET /system/console/bundles/.json --basic -u"admin:admin" | pp
{
   "status" : "Bundle information: 127 bundles in total - all 127 bundles active.",
   "data" : [
      {
         "stateRaw" : 32,
         "version" : "2.0.4",
         "fragment" : false,
         "name" : "System Bundle",
         "symbolicName" : "org.apache.felix.framework",
         "state" : "Active",
         "id" : 0,
         "category" : ""
      },
      {
         "stateRaw" : 32,
         "version" : "0.1.0.incubating",
         "fragment" : false,
         "name" : "Apache Aries JMX API",
         "symbolicName" : "org.apache.aries.jmx.api",
         "state" : "Active",
         "id" : 81,
         "category" : ""
      },
What I can now do is issue GETs and PUTs from my command line, with the shell preserving connection info and mongling up the real curl call.

This means I'm ready to rapidly poke at the Console's Bundles page and see what's happily loaded. I'll have to do a lifecycle test to see just how the states change as Sling comes up, and probably break something to see what a broken Sakai 3 looks like. Perhaps a stubbed, broken bundle. Hmm.

Onward! From what I've seen, most of the Sakai test framework is harnessed up in Ruby, so to make this quickly useful I'll learn Ruby's JSON parsing next. (I've also been looking into jsawk, which is pretty neat.)

Then back to the 3elenium Grid track.

Resources: and be sure to watch the movie!