Sep 28 - Oct 5 Journal
Decision Trees:
This week, I continued looking into decision trees, focusing my concept map on conditionals (if/elif/else statements) and also working a bit on the Titanic survivability tutorial. Because I had other tests later in the week, I decided to complete my concept map entry before the new deadline was set, comparing conditionals to ordering at In-N-Out. I explained how a decision tree uses conditionals to create its branches and categorize the data. Today, Will walked the group through the entropy (randomness) of a decision tree. I thought the discussion with Will and the group was very informative, but I had reservations about the extent of each person's individual research: if we each research too much on our own, our skillsets could become so different that it impedes discussion. A possible area of further research is random forests, an extension of decision trees.

![Learning decision tree entropy from Will]()
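To tie the two ideas together, here is a minimal Python sketch of my own (not from the tutorial): a hand-written if/elif/else "branch" using made-up Titanic-style rules, plus the entropy measure Will described, where 0 means a pure group and higher values mean more randomness.

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels: 0 for a pure group,
    1 for a 50/50 two-class split (maximum randomness)."""
    n = len(labels)
    return sum(-(labels.count(c) / n) * math.log2(labels.count(c) / n)
               for c in set(labels))

# A trained decision tree's splits behave like nested conditionals.
# These Titanic-style rules are made up purely for illustration:
def predict_survival(sex, pclass, age):
    if sex == "female":
        return "survived"
    elif age < 10 and pclass in (1, 2):
        return "survived"
    else:
        return "did not survive"

print(entropy([1, 1, 0, 0]))            # 1.0 -> maximally mixed
print(entropy([1, 1, 1, 1]))            # 0.0 -> pure, no randomness
print(predict_survival("male", 3, 25))  # did not survive
```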
Clustering Algorithms:
Because I finished most of my concept map earlier in the week, I began looking into the different clustering algorithms Dr. Hassibi mentioned last week. The first was k-means clustering, which Dr. Hassibi and the videos I watched described as the most popular and basic clustering algorithm. I found some Coursera videos on the k-means algorithm from UW and Stanford (Andrew Ng's course), in addition to presentation slides from MIT and Carnegie Mellon. I learned the basics of the algorithm: it chooses random points as the "centers" of the clusters and places each point in the cluster of its nearest center. Each center is then moved to the average of its cluster, and the process repeats until the centers stop changing.
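To check my understanding of those steps, I sketched the loop below in Python with NumPy. This is my own toy version with made-up 2D points, and it skips edge cases (e.g., a cluster ending up empty), so it is an illustration of the idea rather than a real implementation.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Basic k-means: random centers, assign points, re-center, repeat."""
    rng = np.random.default_rng(seed)
    # 1. Choose k random data points as the initial cluster "centers".
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # 2. Place each point in the cluster of its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 3. Move each center to the average of its cluster.
        new_centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        # 4. Repeat until the centers stop changing.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Two obvious blobs (hypothetical data for illustration).
pts = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
                [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])
labels, centers = kmeans(pts, k=2)
print(labels)   # e.g., [0 0 0 1 1 1]
print(centers)  # roughly (0.1, 0.13) and (5.0, 5.0)
```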
The other algorithm I looked into a bit was spectral clustering, which Robert brought up last Thursday; one of the Carnegie Mellon slide presentations compares it directly with k-means. I plan on looking into this algorithm more next week, depending on our other tasks. My tentative idea is that k-means separates the data with a straight line, while spectral clustering can follow circular shapes (I am not sure about this yet). Many of the Coursera videos covering k-means also touch on spectral clustering, so I see it as a natural next step after k-means.
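One way to eventually test that line-versus-circles intuition is a quick scikit-learn experiment on two concentric rings, like the sketch below (my own guess at a setup, not taken from the slides). If the intuition holds, spectral clustering should recover the two rings while k-means splits them with a straight cut.

```python
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import make_circles

# Two concentric rings: no single straight line separates them.
X, rings = make_circles(n_samples=300, factor=0.4, noise=0.05, random_state=0)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        random_state=0).fit_predict(X)

def agreement(pred, truth):
    # Cluster labels are arbitrary (0/1 may be swapped), so check both ways.
    match = (pred == truth).mean()
    return max(match, 1 - match)

print("k-means vs. true rings: ", agreement(km, rings))   # expect ~0.5 (straight cut)
print("spectral vs. true rings:", agreement(sc, rings))   # expect close to 1.0
```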
Paper Airplanes:
This week, Elizabeth and I completed our third experiment after singling out the small Hammer as our optimal design. We tried shifting the small Hammer's weight distribution by applying several layers of tape at the front, middle, and back of the plane. We hypothesized that the added weight would angle the plane and therefore change how much lift it produced. I thought the back-taped plane would fly the furthest (be the most efficient) because it could deflect more air downward to produce lift, but in the end the middle-taped plane was the most efficient. To rationalize the data, Elizabeth and I drew diagrams to visualize the airflow (depicted below: back-tape, middle-tape, and front-tape from left to right). We concluded that weight was likely more influential than wing angle in flight. While flaps on commercial jets add lift because the engines supply propulsion, the angled wings on an engine-less plane produce more lift but also slow the plane down, so it falls faster.

![Airflow in pink and plane in purple, drawn out to rationalize the data]()