Intel Xeon Phi for “cheap”

Posted on 24 March, 2016 in tech by

(This work and post are originally from early 2015; some aspects may still be useful, e.g. the kernel patch for lower-end motherboards.)

Recently Intel has been selling a version of their Xeon Phi coprocessor under a promotional deal at 90% off.  This means that one can get a device with 8GB of RAM (on the coprocessor) and 228 hardware threads (57 physical cores, each with 4 hyper-threads) at a reasonable price of ~$200.

When I first purchased the Phi, I was planning to put it into a somewhat old desktop system that I had lying around; however, the motherboard did not support the major requirement of “Above 4G decoding” on the PCI bus.  4G decoding deals with how the system allocates memory resources for devices on the PCI bus.  Unlike consumer-level GPUs, the Phi presents all 8GB as a memory-mapped region to the host computer.  (more about 4G decoding)  Based on some research into this obscure feature, it appeared that most “modern” motherboards have some support for it.  I decided to get an Asus H97M-Plus, which is fairly cheap and fit the computer tower that I already had on hand.  While this motherboard does list above 4G decoding in its BIOS and manual, I am not sure the feature has been properly tested, since, unlike Asus's higher-end motherboards, there was no mention of this board specifically working with above 4G decoding.

Based on examining the early boot sequence, it appeared that the Linux kernel was attempting to find alignment positions for PCI devices equal in size to the requested memory region (8GB in this case), or else depending on the BIOS to perform the PCI allocation before booting.  For the higher-end motherboards that the Phi was known to work with, the “more powerful” BIOSes were apparently allocating memory for the Phi, but this lower-end motherboard's BIOS was unable to handle a request to allocate 8GB of memory, falling back on the kernel to perform the allocation.  Following this observation, I made a small kernel patch (here) which changes requests for alignment larger than the maximal supported size to simply be aligned at that maximal size.  With the components in this computer, even with this change, the Phi gets aligned to a 4GB boundary and still functions correctly.
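The idea behind the patch can be sketched in a few lines.  This is illustrative C++ modeling the decision, not the actual kernel code; the 4GB limit and all names here are assumptions based on the behavior observed with this motherboard:

```cpp
#include <cassert>
#include <cstdint>

constexpr uint64_t GiB = 1ULL << 30;

// Assumed platform limit: this motherboard could only provide up to a
// 4GiB alignment boundary for a PCI memory region.
constexpr uint64_t kMaxSupportedAlign = 4 * GiB;

// A PCI BAR is naturally aligned to its own size (8GiB for the Phi).
// The patch's idea: instead of failing when the requested alignment is
// larger than what the platform supports, clamp it to the supported maximum.
uint64_t choose_alignment(uint64_t requested_size) {
    if (requested_size > kMaxSupportedAlign)
        return kMaxSupportedAlign;
    return requested_size;
}
```

With an 8GB region this returns a 4GB alignment, matching the boundary that the Phi actually ended up on.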

The next challenge, once the Phi was communicating with the computer, was to prevent the chip from overheating.  The discounted versions of the Phi did not include any fans, as they were designed for use in server environments.  Additionally, being a 300+W accelerator, the card is capable of generating a lot of heat.  As such, many “typical” fan solutions that I tried failed to keep the chip cool for longer than a few minutes.  I eventually landed on a high-powered tornado fan, which can move over 80 cubic feet of air a minute.  I ended up having to zip-tie this over one end of the card to ensure that there was enough directed airflow to keep it functional.  (Warning to future users: this fan actually does sound like a tornado, constantly.)

Having had the entire system functional for over a year now, I have managed to use the Phi for a handful of computations.  While there is a decent opportunity for improved performance, this chip really requires that you design customized software for it specifically.  This is especially true given that the Phi is less popular than graphics cards with CUDA, where many mathematical tools and frameworks already have customized backends targeting CUDA, requiring limited effort on the user's part.  While this chip has the nice promise of being able to execute normal x86 instructions, this seems to be of fairly limited use, since the only compiler that will target the chip and use its specialized vector instructions is Intel's own compiler (similar in nature to CUDA).  This makes it fairly difficult to natively run any nontrivial program on this chip, as any external libraries require their own porting effort.  (As an accelerator running embedded kernels, similar to CUDA, this chip works fine; the trouble is only if you are trying to run a program without the host's involvement.)



Photos of the setup:

Estimating the n percentile of a set

Posted on 27 October, 2014 in Algorithms by

Here is an interesting idea that I had recently.  This is just a high-level concept of how it would work; there are no proofs for error bounds or quality, and in fact there are a handful of orderings of sets which would produce terrible results.


To accurately compute the $latex n^{th}$ percentile value of a given set of values, one ends up having to sort the values, which, if they are not integers, takes $latex O(n \log n)$ time.  However, getting the value itself is then trivial, since it is simply a matter of going to the correct place in the sorted values.  I am thinking that one should be able to compute an estimate of the $latex n^{th}$ percentile for a randomly ordered set of elements in $latex O(n)$ time.
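As a baseline, the exact value can also be obtained with a selection algorithm rather than a full sort; std::nth_element gives the exact answer in expected linear time.  A minimal C++ sketch (the function name is mine):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Exact n-th percentile (0 <= n <= 1) by selection rather than a full sort.
// std::nth_element partially orders the values in O(|xs|) on average, so the
// exact baseline is already close to linear; the streaming estimate described
// in this post is aimed at a single pass without rearranging the whole set.
double exact_percentile(std::vector<double> xs, double n) {
    assert(!xs.empty());
    std::size_t idx = static_cast<std::size_t>(n * (xs.size() - 1));
    std::nth_element(xs.begin(), xs.begin() + idx, xs.end());
    return xs[idx];
}
```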

First the basic idea: imagine that you have a set of elements $latex X = \{ x_i \}$.  If we had this set sorted as $latex S$, then finding the $latex n^{th} (0 \le n \le 1)$ percentile of $latex X$ would simply be $latex s_{n * |S|}$.  This implies that we have $latex n * |S|$ elements less than $latex s_{n *|S|}$ and $latex |S|*(1 - n)$ elements greater.  From this point we can imagine constructing two sets, $latex \alpha = \{ x \in X : x < s_{n * |S| } \}, \beta = \{ x \in X : x > s_{n * |S|} \}$, which represent the elements less than and greater than the $latex n^{th}$ value.  This also means that $latex \frac{|\alpha|}{|\alpha| + |\beta|} \approx n$.  Using this idea for $latex \alpha$ and $latex \beta$, we can attempt to construct these sets while iterating through $latex X$ by keeping a current estimate of the value $latex s$ and tracking the elements currently in each set.  This essentially becomes: if $latex \frac{|\alpha|}{|\alpha| + |\beta|} > n + \epsilon$, then take the current value of $latex s$ and insert it into $latex \beta$, then take the largest element out of $latex \alpha$ and set $latex s$ equal to it.  In the case of $latex \frac{|\alpha|}{|\alpha| + |\beta|} < n - \epsilon$, we simply do the reverse by inserting the current value of $latex s$ into $latex \alpha$, and then removing the smallest value from $latex \beta$ and setting $latex s$ equal to it.

Now the problem has been reduced to splitting $latex X$ into two different sets and keeping them sorted somehow, so as to be able to get and remove the largest/smallest elements.  However, this would give an exact answer for the $latex n^{th}$ percentile.  Given that we want an estimate, we can imagine capping the size of these sets at $latex k$, where $latex k$ is a small number such as $latex 2$.  Then, instead of tracking the elements themselves, we simply count the number of elements that are greater or less than the current value of $latex s$.  Additionally, we have the sets track the $latex k$ elements that are largest but still less than $latex s$, and smallest but still greater than $latex s$.  As we iterate through the set, we track the $latex k$ values in $latex \alpha, \beta$ and the sizes of $latex \alpha, \beta$ accordingly, and when we want to change the value of $latex s$ to keep $latex \frac{|\alpha|}{|\alpha| + |\beta|} \approx n$, we just take the new value from $latex \alpha$ or $latex \beta$ respectively and update the cardinality of each set.
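Putting the pieces together, here is a rough one-pass sketch in C++ (my own illustrative code; `alpha` and `beta` mirror the sets above, no error bounds are claimed, and, as noted at the start, some orderings will produce terrible results):

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <set>
#include <vector>

// Streaming estimate of the n-th percentile (0 <= n <= 1) in one pass.
// alpha holds the k largest values seen that are below the estimate s,
// beta the k smallest values seen that are above it; below/above count
// everything on each side, standing in for the full |alpha| and |beta|.
double estimate_percentile(const std::vector<double>& xs, double n,
                           std::size_t k = 2, double eps = 0.01) {
    assert(!xs.empty());
    double s = xs[0];                   // current estimate
    std::multiset<double> alpha, beta;  // truncated boundary sets
    std::size_t below = 0, above = 0;   // full side counts

    for (std::size_t i = 1; i < xs.size(); ++i) {
        double x = xs[i];
        if (x <= s) {
            ++below;
            alpha.insert(x);
            if (alpha.size() > k) alpha.erase(alpha.begin());        // keep k largest
        } else {
            ++above;
            beta.insert(x);
            if (beta.size() > k) beta.erase(std::prev(beta.end()));  // keep k smallest
        }

        double ratio = static_cast<double>(below) / (below + above);
        if (ratio > n + eps && !alpha.empty()) {
            // Too large a fraction below s: push s into the upper side and
            // pull the new estimate from the largest tracked value below it.
            beta.insert(s);
            if (beta.size() > k) beta.erase(std::prev(beta.end()));
            ++above;
            s = *std::prev(alpha.end());
            alpha.erase(std::prev(alpha.end()));
            --below;
        } else if (ratio < n - eps && !beta.empty()) {
            // Too small a fraction below s: the reverse move.
            alpha.insert(s);
            if (alpha.size() > k) alpha.erase(alpha.begin());
            ++below;
            s = *beta.begin();
            beta.erase(beta.begin());
            --above;
        }
    }
    return s;
}
```

On a reasonably shuffled input the estimate lands near the true percentile; on sorted or adversarially alternating input it can drift badly, which is the caveat noted at the top of this post.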

An additional extension to this algorithm: for $latex n \approx .999$, the size of $latex \beta$ would only be $latex \frac{1}{1000}$ the size of the original data set.  Keeping track of the largest $latex .001$ of the data set exactly would no longer be constant-space, however it could still take out a considerable chunk of the computation, depending on how large or small the value of $latex n$ is.

Reducing specific use cases in a language to improve overall usability

Posted on 26 August, 2014 in ilang by

This last summer I spent a considerable amount of time refactoring i-lang.  Since I started implementing this programming language in early 2012, it had accumulated quite a bit of cruft, and it was difficult to continue moving forward.

Refactoring type system

One of the first internal improvements was to overhaul the internal type system.  Before, the type system simply passed around a boost::any.  However, this became troublesome, as all parts of the code had to know about each type so that they could cast it locally.  In many places the code began to look like:

if(a->type() == typeid(Object*)) {
 a = boost::any_cast<Object*>(a);
 // ...
} else if(a->type() == typeid(Array*)) {
 // ...
}
It became even worse when there were two different types involved, as can be seen in the case of performing arithmetic.

Now, the type system has been rewritten to better make use of C++ template and virtual function systems.  This means that one can write code like:

ValuePass a = valueMaker(1);
ValuePass b = valueMaker(2.4);

ValuePass c = a + b;

assert(c->cast<float>() == 3.4);
assert(c->cast<int>() == 3);
assert(c->cast<bool>() == true);
assert(c->cast<std::string>() == "3.4");

The real beauty of this type system can be seen when using the foreign function interface, where the value of arguments can be “injected” into local variables.  This means that a function can be written as:

ValuePass func(Arguments &args) {
 int a;
 double b;
 std::string c;
 ilang::Function d;

 args.inject(a, b, c, d);

 // The arguments are casted to the type of the local variable
 // in the case that a cast fails, an exception will be raised
 // which is the logical equivalent to calling a function with
 // the wrong type signature

 // ... (body of function)

 return valueMaker("hello world");
}

Changes in the type system at the language level

Before this refactor, types in i-lang were defined in a global table of identifiers called variable modifiers.  A variable could have more than one modifier attached to it, and each modifier is used to check the value being assigned to the variable.  What this roughly translates to is something like:

define_type("Unsigned", {|a|
 // check if the value being set, which is passed as argument a, is greater or equal to 0
 return a >= 0;
});

// use this type like
Unsigned Int b = 5; // both Int and Unsigned are called to validate the value '5' is the correct type.

Looking at this implementation of a type system, it does not seem that bad when compared to other programming languages.  As displayed here it is missing the concept of a namespace or import scope, but otherwise it is fundamentally a type system where types are given names and then later referenced by those names.  However, this concept of a type having a name fundamentally goes against i-lang's concept of names only being used as placeholders for values, rather than having explicit places in the language (eg: class required_name_of_class {} vs name_bound_for_use_later = class {}).  This led me to question what a type system fundamentally does.

In lower-level languages such as C/C++, a type system provides information about the space required for an object; however, in a higher-level language such as Python (which i-lang is more similar to on this point), values are fixed-size, with pointers to larger dynamically sized objects when required.  Type systems also provide casting between primitive types, such as a 4-byte integer cast to a floating point.  This on its own isn't that interesting, as there are a limited number of primitive types, and similar operations can be accomplished with code like `1 * 1.0` or `Math.floor(1.2)` for casting.  Finally, type systems provide a way to identify the type of some value, which a language can use to provide features such as pattern matching when calling a function.  Choosing to focus on this last role led to i-lang's concept of a type system: a type is simply a function which can identify whether a value is a member of that type.

The idea of using just a function to identify a type can sound a little strange at first; however, after playing with it some, the idea turns out to be quite powerful.  Here is a quick example of using this type system to implement pattern matching on the value passed to a function.

// This is a function which returns a function that compares the value
// the returned function gets with the value the first function was called with.
GreaterThan = {|a|
 return {|b|
  return b > a;
 };
};
LessThan = {|a|
 return {|b|
  return b < a;
 };
};
EqualTo = {|a|
 return {|b|
  return b == a;
 };
};

Example_function = {|GreaterThan(5) a|
 return "The value you called with is greater than 5";
} + {|LessThan(5) a|
 return "The value you called with is less than 5";
} + {|EqualTo(5) a|
 return "The value you called with is equal to 5";
} + {
 return "The value you called with didn't compare well with 5, must not have been a number";
};

Int GreaterThan(0) LessThan(10) int_between_zero_and_ten = 5;

In Example_function, we are combining 4 different functions, each with a different type signature.  Additionally, we are creating types on the fly by calling the GreaterThan/LessThan/EqualTo functions, which use anonymous functions and closures.  This method also allows classes to have a place in the type system.  We can easily create a special member of a class to check whether a value passed is an instance or interface of the class type.

sortable = class {
 Function compare = {};
};

// check that the array members all implement a compare function and are an instance of the class
sort = {|Array(sortable.interface) items|
 // ...
};

Refactoring Objects and Classes to appear like functions

Before, i-lang used syntax similar to Python or JavaScript dicts/objects when constructing a class or object.  This meant that these items looked like:

class {
 Function_to_check_type type_name: value_of_type,
 Another_type another_name: another_value,
 no_type_on_this: value
}

However, in ilang, except when prefixed with `object` or `class`, the use of `{}` means a function (eg: a = { Print("hello world"); };).  Additionally, colons are not used anywhere else in the language, which made me question why this case was so special.  This led me to ask: why not use equal signs and semicolons like everywhere else, so that defining a class would appear as:

class {
 Function_to_check_type type_name = value_of_type;
 Another_type another_name = another_value;
 no_type_on_this = value;
}

Furthermore, is there any reason to exclude loops and if statements when constructing a class?  Allowing control flow while the class definition is being constructed makes this act identically to a function.

a = true;
class {
 if(a) {
  b = 5;
 } else {
  b = 6;
 }
}

Final thoughts

Starting by cleaning up the internals of i-lang allowed me to take another look at the language and determine why certain choices were made at first.  Bootstrapping a new programming language takes a considerable amount of effort, and can easily lead to choices like making print a special statement rather than an ordinary function (something Python moved away from in version 3).  In my case, I was creating special mechanisms for constructing classes/objects and defining types for variables largely because the type system, scopes, and function internal interfaces were all poorly designed in the first iteration of the language.  Now that the internals have been cleaned up, it is easier to see that these changes are wins for the language.  I doubt that I would have been able to come up with these changes right off the bat in the first implementation; it was only through the pain of the first implementation that the need for these changes became apparent.

Current state of ilang

Posted on 28 January, 2013 in ilang by

This is the first week back at school, which means that the speed of development on ilang will begin to fluctuate again. Over this last break, and the final parts of last semester, I was able to clean up a bunch of features in the ilang programming language.

When I originally set out to create this new language, I was mostly thinking about how the syntax would be different and the specific features that I would want in the language to make it worth creating. However, what I failed to think about was that what really makes a programming language useful today is the absurd number of libraries already programmed, debugged, and available for download for any successful language. As a result of this realization, I have been working on trying to get useful libraries written for the language. In working with the language, however, I have a few beliefs that I am trying to stick with, though I am not so sure how well they will work out.

The first belief is that there should be no need to pause or have any sort of timer system; I feel that the language should attempt to run as fast as possible and focus on processing data. However, when writing testing frameworks to automatically check if the language is working, it has become apparent that a timer system would be useful. ** I still haven't written the timer system, so this is still somewhat of an internal debate.

Along with the timers, there is the problem of getting data efficiently in and out of the programming language. One of the concepts that I have for the future of this language is that the system will be able to distribute computations between a large number of computers, which means that it is not particularly natural for the system to have access to features of the local computer, such as the file-system or standard in/out. I am thinking that for the time being the system could be designed to have access to standard input and part of the file-system; when the system becomes networked across many computers, there could be a way to specify where the standard input/output should go, along with where the file-system has access. The other alternative that I am working on is using the http server as the way to input data, however I expect that this will quickly become cumbersome for large data files. A possible compromise is to use the parameters to support some way to map specific files or folders to names that can be accessed from inside the program.

When considering the libraries that have already been included, there is still a lot of room for development. The modification library is still lacking too many features to be really usable. Along with the modification library, the database library still lacks the ability to save arrays into the database. The complication with arrays is figuring out an efficient way to store the data without causing a significant slowdown. My plan for arrays in the database was that they would be “smaller” than objects, as objects stored in the database do not have any limiting factors on the number of elements. With arrays, I plan to have the system load all the data into memory when reading an array from the database. However, the way the system is currently designed does not allow the elements to be easily accessed under the hood. I am thinking that the system might store each element in its own container; the problem with this is that there would be a large number of database queries when reading data out, and inserting in the middle of the array would require a large number of reads and writes. On the flip side, if the system used one database object to store all of the array elements, there would be few reads and writes, but the object would likely become very large very quickly.

The current plan for future features is to keep working on adding more useful libraries to the core system. This will mostly focus on processing and interpreting data (web server, http client, methods to parse strings such as regex, and some form of CSV or sqlite reader for working with data that has been downloaded). Along with supporting reading data, I plan to include a wide range of machine learning and artificial intelligence libraries and tools. Hopefully, it will be easy enough to integrate their database systems with the ilang database. Once these components are in a somewhat usable state, I should have a framework in which experiments with the code modification library can be conducted.

Random last thought:
I currently plan for the ilang system to have a formatting tool, much like golang does. The reason for this is that, when working with the modification system, I plan to have the system completely regenerate a file using the system's “print from code tree” feature. This should greatly simplify writing the code back to disk, compared to other possible ideas such as trying to find where the code has changed, matching it to the corresponding lines, and then trying to recreate those changes on disk.

Job fair

Posted on 22 September, 2012 in us by

Wednesday of this last week, I went to an EECS job fair.  I found that essentially all the companies there were eager to talk with anyone who came by and take a résumé.  I have even gotten contact back from some companies already, which I was not expecting, as I am a first-year student and I was told by many older students that first-years do not typically get contacted or get internships/jobs.

I think that this brings up some very interesting beliefs that exist in the tech industry.  Many of these have been noted before on countless blogs and news articles, but rehashing them from my own experiences might be helpful to some people.

  1. The tech industry does not particularly care about your age, gender, race, etc.  All they care about is whether you are technically skilled and able to get the job done.
  2. Github profiles are a big deal.  At the top of my résumé, along with my physical address, I decided to put my internet addresses.  This included things such as my email, website, and github profile.  I want to note that one individual I talked with, while looking at my résumé, said “oh nice, you have a link to your github profile,” then circled it with his pen and said he was amazed how many people he talked to did not have some public code profile.  Today this “public code profile” has become a standard for hiring in the coding world.
  3. Do not emphasize what you do not have when talking with the representatives.  I was waiting behind a student who was talking with the Hulu representatives.  First he started out with what he does not like about the Hulu product: the fact that there are ads even though he is paying for it (guess what: you pay for cable and there are still ads; there is no reason Hulu can't do the same).  The representative then interrupted him and asked what sort of projects he has.  He stated that he has made a few smallish things.  The representative then asked if he has a github (see point 2).  He replied that he does, but there is nothing on there because…..some answer like, “my ideas are soooo great that I do not want people copying them, I might sell them at some point…..”

These are somewhat of tips/points/what-not-to-do experiences.  Like I said at the top, these ideas have been noted all over the internet and are not rocket science.

Additionally, in line with my last post about hackathon projects: everything that you write should be version controlled somehow.  You can use git without using github and just keep it on your local machine.  And when you decide that your code is either “done,” not going to continue into a money-making company, or only going to survive as a free product, then you might as well create a public repo on github or similar, so that if/when you are at a job fair, there is something on your public profile to show.

The Hackathon paradigm

Posted on 15 September, 2012 in Programming, us by

Today I was looking at a lot of the different applications that I normally use on my phone and through my web browser.  Someone who had never experienced either of these might believe that I generally have a single specific device for a specific task, and that in terms of functionality there would be little overlap of major features.  However, anyone who has experienced either of these mediums is aware of the wide variety of applications and programs that duplicate the functions of other applications.

My complaint on this issue of applications that start or continue with a Hackathon paradigm is two-pronged.  First, the old Unix philosophy says do one thing and do it well.  On this specific point, I believe that many applications start out with the right intentions, however over time a significant feature-creep effect takes place.  I believe that this is the result of “Hackathon projects” becoming more than Hackathon projects.  The developers of these applications feel that they are going to form a company around a project that, in terms of complexity, should really be no more than a side project.  Essentially what I am saying is: to develop and maintain your application X, it _might_ take 2 hours a week once it is launched.  However, these individuals choose to attempt to make a 50-hour, startup-styled work week out of these types of projects.

My second issue with “Hackathon projects” is this: don't assume that something you can write in 24 hours is not easily copied.  There are a lot of very complex and difficult problems in the world today.  Nearly all of these types of problems cannot be solved in a short period of time.  Additionally, if a product can be made in 24 hours given the proper tools and skills, then it is only a matter of time before a large number of people are competing with you.  Some might even have better products, given that they were able to replicate your product in such a short period of time and then vastly improve upon it thereafter.

With these two issues, I am not saying that Hackathons are bad; Hackathons provide a relatively easy way to create demos to show off skills.  However, when it comes to publishing a product, I believe that people should think a little more about what they are going to create, and invest enough time into the product that it is not just going to be another 1 of 100 “identical” products.

ilang – a new programming language

Posted on 8 September, 2012 in ilang by

I have been working on developing a new type of programming language over the last few months.  From a psychological perspective, it is interesting to see what happens when one tries to create one's ideal programming language: what features does one add, change, or remove?

ilang is still very pre-alpha-ish software, and I don't expect anyone to jump in and start using it, or even download and try it, at this point; there are still too many things unimplemented to call it a complete language.

An overview of what is different:

  • The power of anonymity.  In many programming languages, functions, classes, and many other types are given names that can be looked up inside the type system.  ilang, however, attempts to have classes, functions, and other basic types be anonymous, or without names.  Names are viewed as being useful to the programmers who are writing the programs.
  • Short access to making functions: a function is anything between {}.  This means that to create a function main, it looks like: main = {};
  • Optional typing: This seems to be a new and growing trend in programming languages that are coming out now.  By default the types on variables are not checked at all.  More than one check can be imposed on a variable.  Additional types can be easily encoded with some extra C++ code, and soon with ilang code from within the language itself.  The type checking can also do some other interesting things; more later.
  • Built-in database: This has always been a feature that I think higher-level languages should include; web browsers now include localStorage, for example.  This feature can already handle all primitive types and objects; classes and functions cannot yet be encoded into the database.  However, the advantages of having this built in are already noticeable in testing.
  • Python-style imports, but only at the top and no ‘as’ or *.  I originally made it this way because I was annoyed when some code I was reading would import something in the middle.  First, you have to find where the import was performed to figure out what is being included into the source; also, if you go back to modify something above the point where the import was performed, then you have to move the import up so that it will be available.

To come/planned/supppppper pre-alpha features:

  • Access to the parse tree of files and the ability to modify the tree, changing the code that is running.  There will be the typical system where one can access the direct parse tree in a ‘raw’ format; however, I plan to experiment some and try to find a natural way to access and modify the syntax tree.  On the natural-modification front, I have already noticed some of these properties being easy to implement, as a function can be replaced by simply overwriting its value.
  • Network distribution.  I am hoping to turn this language into something that is useful when it comes to processing large amounts of data.  The trend at this point has been to utilize a large number of computers and attempt to distribute tasks in a sort of map-reduce framework.  The plan is to allow for unstructured computation across the network, where the system automatically determines whether it is more effective to move a copy of the code to the data or to move the data that the computation is working on.

Very incomplete documentation

Link to Github repo

This is only the first intro post.  I believe that there will be more to come as the language further advances and develops.

Comments about another state

Posted on 28 July, 2012 in Summer? by

This post was written over a number of days in a disjoint fashion. You were warned.

Already in the first days of this vacation I have come up with a number of amusing comments and I felt that it was only right that they be recorded somewhere.

First, the airport that we flew into had something like two terminals. I was joking that there would be one person working TSA, and when we came back through they would simply look at us, ask if we were going to blow up the plane, and then let us through. I was somewhat close, in that there were two people working TSA; however, they did appear to have the basic requirements to be called TSA.

Second, when we got in the car, the radio came on and started playing the most stereotypical country music: “she thinks my tractor is sexy.” The next time we turned on the radio, it was a song about loving their front porch more than anything else.

The place that we are going to for our volunteer activities was considerably farther away than originally expected. As a result my mother was driving somewhat above the speed limit. As she claims, this is her first ticket in a while, gg. No California rolls here.

We are volunteering on the Blackfoot reservation this last week. It has been somewhat slow; it appears that this has something to do with the native culture. They believe that it will all end up working out.
The main thing that we have been doing this week is helping the community college get themselves together for their next semester. They are quite small, with only 500 students and a few classrooms.
When we are not helping get the college ready, we have been interacting with the community.

Today we are leaving the volunteering group. We are going to be driving up and around to do a ‘normal’ vacation.

A review of a year

Posted on 18 June, 2012 in me by

Looking back, it has been a while since I last updated this blog.  I think this is because it was a crazy, fully packed year.  Additionally, my time was consumed by great projects such as EDD, on which we worked in secret to prevent our opponents from discovering what we were doing.

Looking back, this year started like a typical senior year.  There were good classes and there were bad classes, all wrapped up together.  Thinking back to first semester, I was taking 8 periods worth of classes, filling every second of my school day.  On top of the classes there was the constant need to work on college applications and the constant pull of managing the EDD project.  Just thinking about this now is making me tired.

Second semester was not any less tiring than the first.  While I decided to take one less class, giving myself some free time during the middle of the day, FRC and EDD started their full-on attack.  During the six weeks of the FRC season there were a lot of complications.  For starters there was the complication of figuring out how to shoot the ball.  Additionally, there was the complication of balancing on the bridge at the end to get the absurd amount of bonus points.  In an attempt to accomplish this we built a challenging FRC robot.

At the same time as the FRC build season, there were many complications in the EDD class.  We had to save the rover team as a joint effort between the two EDD teams.  The rover was designed by members of both teams to be the robot that we were going to rescue during the mission, but the rover team was unable to accomplish their mission of building it and thus required the attention of both teams to ensure its success.

After the build season for FRC, EDD started stepping up its game.  We started really working on our countless iterations of the frame designs for our sphere bot and the adapters for the propellers.  It was during this time that we discovered the complications with the propellers and lift.  Our complications stemmed from the beginning with the RC community and not a full understanding of aerodynamics.  According to the RC community, if one were to spin the propellers infinitely fast there would be an infinite amount of thrust.  Of course this does not make sense; the point at which the equations were said to break down is where the pressure above is not able to replenish the air quickly enough to provide lift.  What we did not know is that there is also the limit of the propellers flattening out, which causes them to stop providing more thrust at higher speeds.  At this point we started our many iterations in an attempt to find possible solutions to our problem.  Our first attempt was to greatly reduce the mass of the robot; we were able to cut the weight in half.  Second, we attempted the most complex parts to date in our machine shop: the manufacturing of our own propellers.  Additionally, we looked into making a larger frame, as propellers only 4″ larger in diameter were designed for 1.5 kg helicopters rather than 200 g ones, which was a lot closer to the range that we were looking for.  We tried all of these designs in parallel, which was a stunning effort, and props (pun intended) to everyone who contributed.  In the end there were still complications with getting the flybar working, and at the last second our last flybar broke and we were unable to replace it.  In typical EDD fashion, we look back and say if only we had another month we could have accomplished what we set out to do (or if only we had not wasted 6 weeks attempting to build the rover, then maybe the sphere would have flown).

After all of the academics and robotics for the year were over, there was what now seems like an eternity of senior activities and graduation to sum up the year.

First day of FRC robotics

Posted on 15 March, 2012 in Robotics by

Today is our first day of the robotics competition.  There is still a lot of work that needs to be done.  For starters, the robot's drive system needs to be rebuilt; what happened when we shipped the robot was that they were unable to find the right size gears.  Second, the intake system needs to be taken apart and have its belts put on.  Finally the bbad needs to be installed (we just finished machining it 2 days ago).
Once the mechanical team finishes, the electrical and programming teams get to go to work.  The robot has not even begun to be wired.  Additionally, the programming team has written a bunch of code hoping that the robot will work, but because the robot was “finished” only minutes before we bagged it there was no chance to test.
All in all, if the robot is working by the end of Thursday, it is going to require a miracle.


Posted on 6 November, 2011 in Uncategorized by

Before I start, this post is about the Obama administration's new online voting system:

The Obama administration has set up “We the People” as a new service that allows citizens to get involved in government.  All one needs to do to participate is create an account at the lower right hand corner of the page.  Once one has created an account, one can show support for a petition simply by clicking vote on that petition.

Here are a few petitions that I believe are worth note and votes:

Abolish software patents
Today there are thousands of bogus patents in the system for different software ideas. One time I was looking through Google Patents to see how bad software patents really were, and I found a patent for a watchdog timer that was from 2009. I then went to look up watchdog timers and found (as I suspected) that they have been around since before the 90s. Continuing this search, I found two patents in the same generic Google Patents search that were infringing on each other.

Freedom of the Internet
This petition is against the E-PARASITE Act that Congress is currently attempting to pass. What this act is basically set up to do is allow BIG companies to take over the internet. For example, if a large corporation determines that you are doing something “illegal” then they can get you thrown off the internet. This bill will also allow websites to be thrown off the Internet in much the same manner. A number of security experts have said that this bill will introduce a major vulnerability into the Internet, as it will enable a “group” to decide what should not be allowed and to modify the DNS records (if abused, you would not be able to ensure that you are on the site you think you are on; for example, your bank's website could be replaced).

Funding NASA
Science and technology are constantly getting their funding cut when times are bad. NASA is a prime example of a successful organization getting its funding cut. This should not be allowed, and NASA should be funded.

Eliminating vibration on the Perfect Horizon

Posted on 2 August, 2011 in Summer? by

For this summer I have been interning with Motion Picture Marine, which designs and builds the Perfect Horizon.  The prime use of the Perfect Horizon is to keep a camera level with the horizon.  This is especially important when working on boats and shooting off into the distance.  In the near future it is also expected that keeping a camera shot level with the horizon will become even more important, because 3D movies on average have shots that are twice as long, and any movement in the image is many times more likely to cause motion sickness in the audience.

The way the Perfect Horizon works is by compensating for movements of the boat or moving vehicle, using two motors to move the platform in the opposite direction.  The problem comes when the Perfect Horizon is not mounted on a stable platform.  When a 100 pound camera is mounted on top, there is a great deal of inertia, and thus the platform underneath ends up getting pushed.  If the platform can be moved even a little bit, then the Perfect Horizon will detect this and attempt to compensate, as it was designed to do.  In turn this causes the system to begin to vibrate.  (This problem can easily be seen on a crane arm, or any poorly designed mount.)

This problem can basically be broken down into two major components: the first is detecting the vibration without misdetecting normal operation, and the second is modifying the internal parameters of the system to prevent it from vibrating.

For my first attempt at detecting the vibration I just used simple signal processing filters.  This ended up working on my test rig, as it was a stationary platform.  In moving to a moving platform I began to notice some problems with the solution, the first being that it would misdetect jolts or changes in motion as vibration.  This was simply unacceptable, as it would degrade the quality of the system and prevent it from doing the job that it was designed for.  With some internet research, I found out that jolts are visible on all frequencies while vibrations are only on a select few.  (link)  This led to using a Fourier transform for processing the signal.  What a Fourier transform basically does is convert signals in the time domain into signals in the frequency domain.  This means that when looking for a vibration, we need only check for outliers in the set of data.  This has turned out to be a better solution, and combined with a simple check to ensure that there are enough samples indicating vibration, there have not been any false positives thus far.
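To sketch the idea (with made-up thresholds, not the actual production code): after transforming a window of samples into the frequency domain, a vibration shows up as a few outlier bins standing far above the average, while a jolt spreads its energy evenly across the whole spectrum and so produces no outliers.

```python
import cmath

def magnitude_spectrum(samples):
    """Magnitudes of a naive O(n^2) DFT -- fine for short sensor windows."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n)))
            for f in range(n // 2)]  # keep only the positive frequencies

def looks_like_vibration(samples, ratio=5.0, min_hits=1):
    """Hypothetical detector sketch: flag vibration when at least
    `min_hits` frequency bins stand out as outliers above `ratio` times
    the mean magnitude.  Both thresholds are illustrative."""
    spectrum = magnitude_spectrum(samples)[1:]  # drop the DC component
    mean = sum(spectrum) / len(spectrum)
    hits = sum(1 for m in spectrum if m > ratio * mean)
    return hits >= min_hits
```

A pure sine wave (vibration) concentrates all its energy in one bin and trips the detector; a single impulse (jolt) has a flat spectrum and does not.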

The second problem came in modifying the gain of the system to prevent it from vibrating.  I believe that part of the problem with the vibration comes from the fact that the system produces thousands of tiny jolts while moving: the system will determine where it wants to move to, and then it will move to that point up to a maximum speed.  Once it has reached that point, it will stop and wait until the next sample is ready.  The solution here is to slow down the motors so that they do not reach the end before the next sensor reading.  The obvious answer of just always slowing down the motors will not work, as then the system would not be able to deal with larger movements because the motors would be moving too slowly.  In my implementation the motors are only slowed down when the system determines that it is vibrating.  This essentially gives the system a form of adaptive control over the internal parameters used in the system.
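The adaptive adjustment can be sketched roughly as follows.  The function name and all the constants here are illustrative, not the values used in the actual Perfect Horizon: the speed cap is backed off while vibration is detected, and creeps back toward full speed once it stops, so large movements still get full motor authority.

```python
def adapt_max_speed(current_max, vibrating,
                    slow_factor=0.8, recover_factor=1.05,
                    floor=0.1, ceiling=1.0):
    """Hypothetical adaptive-gain sketch: reduce the motor speed cap
    while vibrating, recover gradually otherwise.  Called once per
    sensor sample; all constants are made up for illustration."""
    if vibrating:
        return max(floor, current_max * slow_factor)  # back off, bounded below
    return min(ceiling, current_max * recover_factor)  # creep back up
```

Clamping at a floor keeps the platform from stalling entirely during prolonged vibration, and clamping at the ceiling restores normal behavior once the vibration is gone.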

Recent downtime

Posted on 7 May, 2011 in JSApp.US by

In the last 24 hours, JSApp.US was down for the majority of the time.  The problem started with a power outage at the Linode hosting facility.  This brought the server down at first.  The system then had a problem getting back up, as there was a superblock failure on the root file system (not the database).  Linode continued to do maintenance on the system, and it is currently my belief that there were underlying problems causing the issue.  Earlier today I was able to get back into the system, but it was still having problems booting into the normal system.  After some more work with the lish interface and loads of reboot attempts, the system is now running again.  I have tested the system to ensure that it will reboot into the correct configuration, and it seems to be ok at the moment.

Reflections on FRC world

Posted on 30 April, 2011 in Robotics by

So this was my first time at FRC world.  It ended up being a lot of fun, and was very enjoyable to be able to interact with other FRC teams, and see a lot of the booths that were from the various suppliers of FRC materials.

First off, congrats to our admin team for winning the entrepreneurship award.  This just shows how much time has gone into it.  Something that winning this award made me realize was that there were hundreds of teams at world, and that they were representing thousands of teams.  Just being one of the few teams that have won an award on the world stage is a great accomplishment.
Something that did happen at world was a lot of talk with regards to what will happen next year with our team.  We are planning to do a lot of major overhauls on the build side of our team, so that we can produce a quality robot.  There are a few plans floating around right now with regards to the issue, and I think that if we are able to implement some of them we should be in good shape.

Some of the plans that we came up with are somewhat interesting:

  1. Build a second robot and practice driving
  2. Start a lot of research-based projects
    1. The first idea is a variable controlled pneumatic system
    2. Kinect integrated into the control system
    3. full on autonomous robot (more just the idea of having the sensor feedback developed)
  3. Build a number of robots during the off-season
  4. Train more members to be able to machine the parts

FRC world 2011

Posted on 23 April, 2011 in Robotics by

This coming weekend is the FRC 2011 world robotics competition.  After this, the 2011 FRC season will be over.  We are hoping to do decently well, as we feel that our robot can perform, but the question is going to be how it holds up against the other world-class robots.

While this will mark the sad end of the 2011 FRC season, it will also mark the beginning of the next cycle for our team when it comes to FRC robotics.  Next year we are already planning to get a lot more training in for everyone on the team.  It should be interesting to see everyone getting trained on all the various parts of the build season.  This should differ greatly from what we had this year, with a few main people doing large chunks of the work.

I am hoping that next year will be a lot of fun, as there are some things that should be different from this year.  The first is that we (as a team) are wishing to build more practice bots during the off-season.  Our goal is three different drive systems and then articulations to go along with said drive systems.  As the current programming leader, I would also like to see more people getting on the programming side of the robot.  I have big dreams of seeing a robot that could drive itself completely autonomously.  I am currently thinking that an Xbox Kinect could be used as the primary sensor for the system.  When in an indoor condition, the Kinect is able to give distance data up to 27 meters away, which would be more than enough for decent robot control.  The Kinect also packs a decent resolution camera.  The main problem with the Kinect is that it is USB, and the cRIO has no interfaces for USB.  So my current plan is to use a PC/104-like board with USB and ethernet to interface between the two devices.  This second processing unit could also serve as a video coprocessor on the robot, possibly providing a lot more computational power.
