
60 Days of Udacity: A Bertelsmann Technology Scholarship in AI

I applied for and was recently awarded the Bertelsmann Technology Scholarship, where a group of students takes part in an Artificial Intelligence track made up of 5 parts to be completed in 3.5 months.

 

As part of taking the class, we have to take part in a Slack channel where we post our daily studies for 60 days, reflecting on what we have learned. This is a transcription of those 60 days. The public GitHub wiki is located here: https://github.com/chromilo/udacity-bertelsmann-scholarship/wiki

Day 1: I am in p3 (Datasets) doing the X-ray annotation project. I have created the Appen job using the "Image Categorization" template. I uploaded the X-ray image data and modified the CML to make the questions specific to checking for pneumonia. Also updated the Examples section. I am still working out the usage of conditional only-if in checkboxes to determine what other smarts to include when annotators go through the page. Created one Question so far and will continue working on the other questions next (this seemed longer than 30 min so I'll continue tomorrow; have to make the 60 days last). I updated the files on my GitHub here https://github.com/chromilo/udacity-bertelsmann-scholarship #60daysofudacity

Day 2: I just finished setting up my TopGreener smart plug IoT device to automate turning on the outdoor Christmas lights when sunset hits. I believe it connects to a weather network to estimate when sunset is for my city. That API call is a simple lookup, but the astronomy service on the other side may be using AI/ML to correctly annotate when sunset is for a location (I doubt they use human annotation). This service is one example: https://services.timeanddate.com/api/services/astronomy.html

I updated my notes here on my GitHub wiki https://github.com/chromilo/udacity-bertelsmann-scholarship/wiki

  

   

Day 3: Updated the Appen job with 5 questions for the p3 project. What ratio of test questions to the 117-row dataset is needed to generate an unbiased result set? My draft project proposal documentation so far is here: https://github.com/chromilo/udacity-bertelsmann-scholarship/blob/main/p3-creating%20dataset/project-proposal-xraydataset.pdf #60daysofudacity

Day 4: Started with p4 lesson 1 and completed the two quizzes there. I went back to p2 and realized I missed the quiz in section 11 (Unsupervised vs. Supervised Approaches), so I completed that as well. I think I missed it because I initially did the lessons from my phone, which makes it harder to see all the material, instead of from my desktop. I also added another question to my p3 project in the Appen dashboard for a total of 6 questions, to ensure I include a test case for every 19 data points, i.e. 117 / 19 = 6+ questions needed in total. I have updated my notes here https://github.com/chromilo/udacity-bertelsmann-scholarship/wiki #60daysofudacity

Day 5: Finished p4 lesson 1, including the confusion matrix quiz. Starting to see practical AI uses everywhere, including this robotic pool cleaner at the YMCA. I think this pool cleaner is similar to the iRobot vacuum cleaner in that the same sensors are used to sense wall obstructions. "Transfer learning" is probably used here to copy the pre-trained network for the first few layers, while the output layer is tuned to work in a pool environment, like checking for mold for example. The YMCA pool is still closed due to COVID but I am excited nevertheless to see some activity on the pool deck.

Updated my github wiki notes here https://github.com/chromilo/udacity-bertelsmann-scholarship/wiki  #60daysofudacity

  

I was just googling pool-cleaning robots and the embedded-systems processors they use, which need low power to work safely underwater. I don't know how TinyML works yet, but if it changes code size then power consumption may change too. Will be reading up more on TinyML.

Day 6: Started training my model for the p4 lesson 2 project. I signed up for a Google Cloud Platform free trial (90 days and $390 to spend); the trial expires in Mar 2021. Created an AutoML Vision classification project and set up a standard Google Cloud Storage free-tier account in the us-west1 region (single region only), which will be used to store the CSV dataset that I upload to GCP. I picked 100 chest X-ray images each for normal/pneumonia for my new AutoML dataset and uploaded the zipped file to GCP, which is now importing the images into the dataset.

Day 7: Completed the first model, "Binary Classifier with Clean/Balanced Data", for the p4 lesson 2 project. The wording "score threshold" appears to have changed to "confidence threshold" on the GCP Vision dashboard, which confused me a little. The "TEST & USE" tab also shows as "Predict" in the lesson graphic, which is incorrect. I don't see "Upload Image", so I assume I have to click on "Deploy Model" to actually run the test. This is where it starts incurring costs. So far I have completed the AutoML Model Report for the clean/balanced binary classifier only. I will add 3 additional models to the project next, with cleanunbalanced, dirtybalanced, and 3class datasets.
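That "confidence threshold" is just the cutoff applied to each prediction's confidence before it counts as a positive. A toy sketch (made-up scores, not AutoML output) of how precision and recall move as the cutoff changes:

```python
def metrics_at_threshold(scores, labels, threshold):
    """Count TP/FP/FN at a given confidence threshold, then derive
    precision and recall. `scores` are model confidences for the
    positive class ("pneumonia"); `labels` are the ground truth."""
    tp = fp = fn = 0
    for score, label in zip(scores, labels):
        predicted_positive = score >= threshold
        if predicted_positive and label == 1:
            tp += 1
        elif predicted_positive and label == 0:
            fp += 1
        elif not predicted_positive and label == 1:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy confidences: raising the threshold trades recall for precision.
scores = [0.95, 0.80, 0.60, 0.40, 0.30]
labels = [1,    1,    0,    1,    0]
print(metrics_at_threshold(scores, labels, 0.5))
print(metrics_at_threshold(scores, labels, 0.75))
```

At the higher cutoff the one false positive disappears (precision rises to 1.0) while one true pneumonia case is missed (recall stays at 2/3).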

Day 8: Created another dataset containing 100 "normal" images and 300 "pneumonia" images to train the "Binary Classifier with Clean/Unbalanced Data" model. Also prepared the zip files for the next two datasets, i.e. dirtybalanced and 3class. Just waiting for the cleanunbalanced dataset to finish training, which could take a few hours. I entered Codewars and finished a kata in Python. Currently considering the https://adventofcode.com/ challenge for Day 4.

Day 9: Completed training the remaining 3 datasets (cleanunbalanced, dirtybalanced, and 3class) inside GCP. I recorded the confusion matrix for each one and answered the questions asked in the project report for each model. I am still working out how to correctly identify the binary classification (positive vs. negative) for a matrix with 3 classes. The F1 score is also not calculated yet, so I believe I need to upload a new dataset that only includes normal and one of the pneumonia classes in order to get a correct confusion matrix established. I'll do that next. I also completed two Python challenges on https://adventofcode.com/ with the results posted on my GitHub here https://github.com/chromilo/adventofcode. I might also set up my Raspberry Pi 3 in order to run some Kaggle challenges next.
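One way to read a binary positive/negative out of a 3-class confusion matrix is to collapse the two pneumonia classes into a single positive class and compute F1 from the collapsed counts. A sketch with made-up counts (not my actual GCP numbers):

```python
def collapse_to_binary(matrix, positive_classes):
    """Collapse a multi-class confusion matrix (rows = actual,
    cols = predicted) into binary TP/FP/FN/TN counts, treating the
    class indices in `positive_classes` as the positive label."""
    tp = fp = fn = tn = 0
    for actual, row in enumerate(matrix):
        for predicted, count in enumerate(row):
            actual_pos = actual in positive_classes
            pred_pos = predicted in positive_classes
            if actual_pos and pred_pos:
                tp += count
            elif not actual_pos and pred_pos:
                fp += count
            elif actual_pos and not pred_pos:
                fn += count
            else:
                tn += count
    return tp, fp, fn, tn

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Made-up 3-class counts: rows/cols = [normal, viral, bacterial].
matrix = [[18, 1, 1],
          [2, 15, 3],
          [1, 2, 17]]
tp, fp, fn, tn = collapse_to_binary(matrix, positive_classes={1, 2})
print(f1_score(tp, fp, fn))
```

Note that confusion *between* the two pneumonia classes disappears after collapsing, which is exactly why the 3-class F1 looks different from the binary one.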

Day 10: Uploaded new datasets to my GCP project to train and evaluate. I split the 3class model into 3classviral and 3classbacterial, comparing each one to a normal dataset to ensure there is less confusion. Calculated the F1 score given the new datasets. I have now completed my project report for p4 lesson 2 and am looking to figure out how to do some peer reviewing if anyone is interested. I posted a question on a UoPeople subreddit asking what their process is, because grading of projects and assignments there is done by classmates. I asked if they use any digital rights management product to protect the projects but haven't heard back yet.

Day 11: Started p5 lesson 1 on "Measuring Impact and Updating Models". Got to item 13, or 76% done. I followed up on peer reviews with UoPeople and it seems they use ProctorU, which isn't free. An alternative is using Google Drive to apply rights management to the project document to prevent copying, printing, or downloading of that project. This would also require a Google account, so plagiarism can be tracked that way. Just some thoughts.

Day 12: Drafted goals and objectives for study group #sg_pinoi_and_pin_ai for my Filipino peoples. Also added a poll asking for a good time to meet. Read some of the SG Toolkit to get some ideas. Also reviewed the first half of p5 lesson 1.

Day 13: Completed all the p5 lessons and quizzes. Started a draft of the capstone project but am still unsure what to use for the business proposal: an actual use case from work, one I can use personally at home, or something we can use at school for this Udacity course.

Day 14: Continued working on the capstone project. Also tried to set up some last-minute calls with the #sg_pinoi_and_pin_ai study group but will likely continue next week, as it's Christmas in the Philippines now.

Day 15: Reread the p5 lessons and updated my GitHub wiki https://github.com/chromilo/udacity-bertelsmann-scholarship/wiki. Trying to find another free ML service as an alternative to GCP AutoML since I have only $200 in credits left. Planning to use the Titanic dataset from Kaggle.

Day 16: Created a new Google Colab notebook to test training the Titanic dataset from the Kaggle challenge. Still undecided on the business case for the p5 project.

Day 17: Reviewed great summary notes from @Winsome Yuen for p5. Trying to get train.csv dataset working from the Kaggle titanic challenge. 

Day 18: Will use the company's mid-year real estate prospectus generated by the R&D team to come up with a capstone project proposal. Since the COVID pandemic is unlike any other pandemic, the idea is to use AI/ML to forecast the service-oriented jobs most heavily affected in the coming months. US: https://bit.ly/3hDB8v3 CAN EN: https://bit.ly/2Y8Bulz

Day 19: Close to fixing the Titanic dataset for the Kaggle challenge, using the Keras API to train, evaluate, and predict survivors from the given test set. I just don't know how to set the value of the label for the test dataset, so I joined the Kaggle study group. I also set up a kickoff meeting for the #sg_pinoi_and_pin_ai study group tomorrow afternoon at 3:30pm Pacific.

Day 20: Finished first page of capstone project. Updated SG charter with meeting minutes from today.

Day 21: Finished and submitted my first Kaggle challenge using the Titanic datasets, detailed in my GitHub repo here and wiki here. I used Keras on top of TensorFlow, plus the pandas (dataframes) and NumPy Python libraries. I submitted a survivor count of 184 out of 418, which is 0.74162 accuracy on the Kaggle leaderboard. Opened another Zoom bridge at 10:30am Pacific today for our #sg_pinoi_and_pin_ai study group. Will hopefully start with project peer-review initiatives next week.
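For comparison, the classic sanity-check baseline for this challenge needs no ML at all: predict survival from the Sex column alone. A stdlib-only sketch over a few made-up rows in the competition's CSV shape (not real passengers):

```python
import csv
import io

# A few invented rows in the Kaggle test.csv shape (not real data).
test_csv = """PassengerId,Pclass,Name,Sex,Age
892,3,"Doe, Mr. John",male,34.5
893,3,"Doe, Mrs. Jane",female,47
894,2,"Roe, Mr. Richard",male,62
"""

def sex_baseline(csv_text):
    """Predict Survived=1 for female passengers, 0 otherwise."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [(row["PassengerId"], 1 if row["Sex"] == "female" else 0)
            for row in rows]

for passenger_id, survived in sex_baseline(test_csv):
    print(passenger_id, survived)
```

A trained model has to beat this kind of one-rule baseline to be worth its compute, which puts leaderboard scores like 0.74 in perspective.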

Day 22: Gathered the three project rubrics and am working to assign points to each row in preparation for project peer grading in study group #sg_pinoi_and_pin_ai. Head on over to that channel to get yours graded, mga Pinoy and Pinay. Happy New Year to all!

Day 23: Updated study group charter for #sg_pinoi_and_pin_ai with 2 initiatives. Customized project 1 rubrics for peer grading next week. Updated my personal blog.

Day 24: Completed all other sections of the capstone project except for the MVP and post-MVP-Deployment sections. Registered for a free Figma.com account, as suggested by @Winsome Yuen, to try to generate a sketch of my product. Trying to figure out how to do that, as it's not as easy as it looks.

Day 25: Updated the Project 1 rubrics and scheduled a Zoom call for tomorrow for the #sg_pinoi_and_pin_ai study group. For the capstone project, still trying to figure out what to wireframe with Figma, because the proposal outcome is not an online website but a PDF report. #60daysofudacity

Day 26: Completed the MVP and post-MVP-Deployment sections of the capstone project. I was able to figure out how to use the Figma wireframing app to generate a layout of the business outcome in the form of a line graph. I had to go back to A/B testing using @Rohit Roy Chowdhury's great handwritten notes to review, as I couldn't quickly find it in the Udacity classroom lessons. Excited to get it peer-reviewed when we have the rubrics ready from our study group. We have a meetup scheduled today in a few hours' time, so I hope to see some of the members there.
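The core A/B testing calculation is small enough to sketch directly: compare two conversion rates with a two-proportion z-test using the pooled standard error. The counts below are hypothetical, just to show the mechanics:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for an A/B test on conversion counts,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 200/1000 vs 250/1000 conversions.
z = two_proportion_z(200, 1000, 250, 1000)
print(round(z, 2))  # ≈ 2.68; beyond 1.96, so significant at the 95% level
```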

Day 27: Started working on project 2 rubrics for peer review. Updated the SG charter and posted meeting minutes from yesterday's meetup for our #sg_pinoi_and_pin_ai channel. Joined the featured SG #sg_ai_practioners and read about random forests.

Day 28: Continuing to work on project 2 rubrics.

Day 29: Attended meetup “Learn Data Science by doing Kaggle Competitions” where they looked at prostate cancer detection by looking at cancer tissue pictures.


Day 30: Half-way through Pluralsight course “Building Bots with Microsoft’s Bot Framework”. 

Day 31: Continued listening to the Pluralsight course “Building Bots with Microsoft’s Bot Framework”. Now at lesson 5 of 7, topic is “The Dialog of Bots”. Complicated stuff.

Day 32: Completed project 2 peer review rubrics template for our study group #sg_pinoi_and_pin_ai, and setup next weekly meeting. Finished lesson 6 of 7 of Pluralsight course “Building Bots with Microsoft’s Bot Framework”, topic is "Adding Natural Language Processing through LUIS AI". 


Day 33: Prepared meeting minutes for study group weekly calls. Updated project 1 rubrics for the sg_canada study group.

Day 34: Attended kickoff for mentorship program at BCIT, which I am hoping to use for our study groups as a possible initiative, if the members want to.


Day 35: Continued listening to lesson 6 of 7 of Pluralsight course “Building Bots with Microsoft’s Bot Framework”, topic is "Adding Natural Language Processing through LUIS AI". Did the intro to ML and started reading through the syllabus for CS 7638 Artificial Intelligence for Robotics class. 

Day 36: Finished the first of 4 challenges in the Veracode Secure Coder competition, titled "OWASP #2: Broken Authentication", running until Jan 15 midnight. Doing the challenge in Python, of course.


Day 37: Started with the CS 7638 Artificial Intelligence for Robotics class and completed lesson 17/37 of the Localization Overview module. Also started with the CS 6601 Artificial Intelligence class and completed lesson 10/54 of the Game Playing module. Here is my reddit post asking other students for advice on these two courses.

Day 38: Finished all 37 lessons of Localization Overview module from CS 7638 Artificial Intelligence for Robotics class. Learned about the foundation of autonomous driving and Bayes' theorem. 


Day 39: Installed Miniconda3 and PyCharm Community 2020.3 on my computer. Finished Python quiz 2 out of 4 from problem set 0. Working with classes and dictionaries.

Day 40: Completed 4/5 lessons in the Problem Set 1 module. Working on the python Localization programming quiz for lesson 4. Scheduled our third weekly study group meetup for #sg_pinoi_and_pin_ai happening in 5 minutes.  

Day 41: Completed the Problem Set 1 module, including the Python code for the Localization Program. Very cool concept, applying probabilities to a robot's 2D world (lane marker vs. road) as in Google's self-driving car. Now working to submit via Gradescope.
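The heart of that localization program is a belief histogram updated by two operations: sense (multiply by the measurement likelihood, then normalize) and move (shift probability mass). A minimal 1D sketch in the spirit of the course exercises, with a made-up world and sensor probabilities:

```python
def sense(belief, world, measurement, p_hit=0.6, p_miss=0.2):
    """Bayesian measurement update: weight each cell by how well it
    matches the measurement, then normalize back to a distribution."""
    posterior = [b * (p_hit if cell == measurement else p_miss)
                 for b, cell in zip(belief, world)]
    total = sum(posterior)
    return [p / total for p in posterior]

def move(belief, step):
    """Exact (noise-free) cyclic shift of the belief by `step` cells."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

world = ["green", "red", "red", "green", "green"]
belief = [0.2] * 5                    # uniform prior: robot is lost
belief = sense(belief, world, "red")  # robot senses red
belief = move(belief, 1)              # robot moves one cell right
print(belief)                         # mass concentrates right of the red cells
```

Alternating these two steps is the whole algorithm; sensing sharpens the belief and (noisy) motion spreads it out again.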

Day 42: Started with lesson 4/28 of Kalman Filter module, in preparation for the Meteorites (Kalman Filter) Project due in two weeks. 

Day 43: Completed lesson 20/28 of the Kalman Filter module, covering 1D Kalman filter code in Python.
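That 1D Kalman filter reduces to two Gaussian operations: a measurement update that multiplies two Gaussians, and a motion update that adds their means and variances. A sketch following the course's formulation (the toy prior, measurements, and noise values are mine):

```python
def measurement_update(mean1, var1, mean2, var2):
    """Combine prior N(mean1, var1) with measurement N(mean2, var2).
    The result always has lower variance than either input."""
    new_mean = (var2 * mean1 + var1 * mean2) / (var1 + var2)
    new_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return new_mean, new_var

def motion_update(mean, var, motion_mean, motion_var):
    """Predict after motion: means add, uncertainties add."""
    return mean + motion_mean, var + motion_var

mu, sigma2 = 0.0, 1000.0  # vague prior over position
for z, u in [(5.0, 1.0), (6.0, 1.0), (7.0, 2.0)]:  # measurement, motion
    mu, sigma2 = measurement_update(mu, sigma2, z, 4.0)
    mu, sigma2 = motion_update(mu, sigma2, u, 2.0)
print(mu, sigma2)
```

Even starting from an almost useless prior (variance 1000), a few sense/move cycles pull the estimate close to the measurements.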

Day 44: Working on problem set 2 programming quiz, calculating the number of dimensions using matrix class.

Day 45: Submitted problem set 2 to Gradescope; was able to figure out the correct multi-dimensional matrices. Started working on the Meteorites Project due Feb 8. The goal is to track a collection of falling meteorites: 1) estimate their future locations, and 2) defend the Earth from them with your laser turret. I'm a little terrified, let me tell you.

Day 46: Working on the two measurement methods for the meteorite python project. Paid for the tuition before deadline tomorrow.

Day 47: Continue to work on the get_meteorite_observations and do_kf_estimate_meteorites methods for the meteorite python project. Updated meeting minutes from study group.

Day 48: Finally got the get_meteorite_observations and do_kf_estimate_meteorites methods working and am now able to get 4/8 test cases to converge. Still some ways to go before I start working on get_laser_action method. 


Day 49: Completed the get_laser_action method but am still failing on 2/8 cases. I need to further tune the heuristics to prevent execution timeouts.


Day 50: Signed up for Open Athens and linked Google Scholar to GT Library for access to research papers on Localization and Kalman Filters. Have to find and write relevant research paper in 300-600 words due by Monday. Yikes.

Day 51: Found a research paper by Nick Dijkshoorn, from his Master's thesis, about SLAM (Simultaneous Localization and Mapping) with the AR.Drone. It covers what we learned in class about using Bayes' theorem for beliefs and EKFs (extended Kalman filters) for non-linear problems to localize the drone in a 2D world. Also contemplating what to do for the hardware challenge. I might buy a premade hardware kit that allows me to directly access the sensors and actuators via custom code I write. A drone would be nice but could be expensive.

Day 52: Starting on the face-tracking hardware project. Ordered from Amazon a ULN2003 motor driver board and some 40-pin male/female cables for my Raspberry Pi 3 Model B board. I hope to use Kalman filters to track left-to-right face movement using a Logitech external web camera. I also have my son's Star Wars Sphero R2-D2 robot, which accepts JavaScript code over Bluetooth to teach it and make it autonomous. Unfortunately, since it is a premade robot, I doubt I can use it for the hardware challenge.

 

Day 53: Started the next chapter, on particle filters. Apparently they're more accurate than either histogram or Kalman filters. Currently at 13/28 lessons in this chapter.
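The step that gives particle filters their punch is resampling: particles survive into the next generation in proportion to their importance weights. A sketch of the course's "resampling wheel" idea, with arbitrary toy particles and weights:

```python
import random

def resample(particles, weights):
    """Resampling wheel: draw len(particles) survivors with
    probability proportional to their weights."""
    n = len(particles)
    survivors = []
    index = random.randrange(n)
    beta = 0.0
    max_w = max(weights)
    for _ in range(n):
        beta += random.uniform(0, 2.0 * max_w)
        while beta > weights[index]:
            beta -= weights[index]
            index = (index + 1) % n
        survivors.append(particles[index])
    return survivors

particles = ["p0", "p1", "p2", "p3"]
weights = [0.1, 0.1, 0.7, 0.1]  # p2 matches the measurement best
print(resample(particles, weights))  # typically dominated by 'p2'
```

Unlikely particles die off and likely ones are duplicated, which is why the cloud converges onto the true state over successive sense/move cycles.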

Day 54: Reviewed plans to complete the foundation course review during Study Jam 1.0 and the optional projects review from our study group meetup today.

Day 55: Completed 28/28 lessons of Particle Filters module. Next will look at improving my project 1 defense method as it is still failing 4/10 cases. Downloaded project 2 Mars Glider project.

Day 56: Halfway through Problem 3 programming quizzes.

Day 57: Parts for the hardware challenge arrived. Connected the Pi motor HAT and started downloading OpenCV to the RPi.

Day 58: Was able to successfully run TestMotor.py and TestCamera.py using the Spyder3 IDE. I can actually rotate the stepper motor by modifying sleep timeouts. Now I need to build a makeshift cage to hold the external webcam and have it rotate with the motor.

Day 59: Hosted two lightly attended events today and covered the #tech_help channel for an hour as part of Study Jam 1.0. Also spent some time debugging my meteorite Python project due Tuesday. Stuck on the defense method; too many meteorites hitting Earth.

Day 60: Submitted Problem 3 on particle filters to Gradescope. Still need to work on project 1, due tomorrow. That's it for this 60-day challenge.


One of the student scholars started a new initiative called #30_days_sprint for those who have already completed the #60daysofudacity challenge. Since I had more time left in the 3-month scholarship, I joined that challenge as well and started logging below:

Day 1: Started with Search module now at lesson 9/21. Also started with Kinematic Bicycle Model 101 module.

Day 2: Started with Mars Glider project and it is hard.

Day 3: Was able to start testing my new function to estimate location of glider given a uniform distribution of particles. Still does not converge but I have some ideas.

Day 4: Passed 5 and failed 5 test cases of the Mars Glider project using particle filters. Here is a video of test case 1 convergence on the glider.

Day 5: Started on the navigation method for the Mars Glider project. Have to use some trigonometry and keep particles converged each time step to fly glider back to 0,0.

Day 6: Submitted a research paper on deep-belief networks and particle filters used to distinguish test vs. control lines in images for recognizing pathogens. By combining ML as input to a small number of particles, the prediction of image segmentation between test and control lines is optimized. Paper: “An Improved Particle Filter With a Novel Hybrid Proposal Distribution for Quantitative Analysis of Gold Immunochromatographic Strips” from 2019.

Day 7: Finished 11/21 lessons in the Search module. Problem set 4 due on Feb 23. Struggling with the Mars Glider steering method; test cases pass with a 20-second timeout but have to finish in 10 seconds, unfortunately. Not sure what else to optimize. https://youtu.be/_v0lm72f3cg

Day 8: Finished all lessons in Search module and Problem set 4 lessons. Working on the stochastic motion dynamic programming quiz for that module. Had our weekly study group meetup this afternoon followed by my bi-monthly mentee meetup to update resumes and cover letters.

Day 9: Submitted the stochastic motion dynamic programming quiz so I can now focus on project. Really struggling with the Mars Glider project because the Gradescope virtual machine does not have enough compute compared to my local computer and it times out in 10 seconds so some of the test cases fail. Here is my navigation method recorded during our office hours https://youtu.be/BFTGA6YH2C0

Day 10: Finally got some decent cases passing on Gradescope after tweaking and tuning many hyperparameters. I might leave 98% alone and go back to my face tracking hardware project.

Day 11: Powered up Rpi with uln2003 motor HAT board to test kalmanfilters.py code. Now I have to plug in formulas.

Day 12: Trying to figure out how to incorporate RobotTracker.py with both web and motor py files. Also looking at building a contraption to affix webcam onto step motor somehow.

Day 13: Working on tuning kalman filter measurement update, wondering if I need to use velocity and acceleration in F state.  Also need to submit particle filter research today.

Day 14: Submitted a research paper on the Perseverance Mars rover about search and motion planning. I found an article about the improvements ML models could provide to the Approximate Clearance Evaluation (ACE) algorithm used by the rover to find an optimized and safe route to take. The improvements reduce the compute needed to run the algorithm, so those savings can be used elsewhere.

Day 15: Completed Proportional-Integral-Derivative (PID) Control module. Next up is problem set 5 then Rocket PID mini-project which doesn’t look mini to me with 3 parts to it. I feel like I’m 2+ weeks behind some very smart cookies.

Day 16: Finished the Problem Set 5 lesson and the corresponding Python assignment. Due next week, so I still have time to tweak. Started with Rocket PID mini-project part A, which focuses on using a PD controller to adjust the rocket pump's output in order to meet pressure demands and not crash.

Day 17: Completed part A of the Rocket PID mini-project. That wasn't too bad; I just needed the correct PD controller formula. Working on part B now.
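That PD formula is just proportional gain on the current error plus derivative gain on its rate of change. A toy sketch driving a first-order plant toward a setpoint (the gains and plant model are made up, not the project's):

```python
def pd_controller(setpoint, kp, kd, dt=0.1, steps=200):
    """Drive a toy first-order plant to `setpoint` with a PD loop."""
    value = 0.0
    prev_error = setpoint - value
    for _ in range(steps):
        error = setpoint - value
        # PD law: proportional term + derivative (rate-of-change) term
        output = kp * error + kd * (error - prev_error) / dt
        prev_error = error
        value += output * dt  # toy plant: output directly moves the value
    return value

final = pd_controller(setpoint=100.0, kp=0.8, kd=0.1)
print(final)  # converges to ~100
```

The derivative term damps the approach; with kp alone the response overshoots and oscillates, which is exactly the jagged-line symptom the tau tuning in part B is fighting.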

Day 18:  Completed part B of Rocket PID mini-project. Passed 4 out of 7 test cases so far. Still working to tweak the correct tau values as the lines look jagged and are oscillating.

Day 19: Working on part C of the Rocket PID project. Not sure how to use two PID controllers to return oxidizer and fuel throttles, two separate outputs, to control rocket’s flight and landing.

Day 20: Reviewing forms to see what I can do for Study Jam 2.0.

Day 21: Finished part C of Rocket PID project and submitted to Gradescope. Currently only getting 79/100 so this still needs tons of tweaking.

Day 22: Started to look at the next project due in April on Warehouse search.

Day 23: Currently getting 97/100 on gradescope for the PID controller project which does not match what I get when run locally. Need to start looking at problem set 5 due next week now and may have to come back to PID later.

Day 24: Submitted Problem set 5 to Gradescope. Went back to PID controller and still no luck. Need to do more reading to thoroughly understand what to tune as I'm just blindly stabbing in the dark here.

Day 25: Updated my CS 7638 course portfolio slide deck for Saturday. Haven't started on the Warehouse search project yet; getting nervous. Tweaked the PID Controller project and am now at 99/100 on Gradescope. Why can't I leave this alone?

Day 26: Started with part A of Warehouse search project using A*. Also preparing for research paper on PID controller due Monday.

Day 27: Hosted this event “Course review of AI for Robotics class” for Study Jam 2.0 earlier today. Finished Bingo Card as part of #BingoChallenge and continued with Warehouse search project.

Day 28: Completed research on new PID controller algorithm using modified NNA. Will submit it later today.

Day 29: Submitted the PID controller research paper. Passed 2 test cases so far with Warehouse search. I do not know yet how to come up with the heuristics matrix.
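For a grid world, the heuristic matrix can simply be the Manhattan distance from each cell to the goal, which stays admissible when moves are 4-directional with unit cost. A small A* sketch over a made-up grid (not the project's actual warehouse):

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 4-connected grid; 1 = wall, 0 = free.
    Heuristic: Manhattan distance to the goal (admissible here).
    Returns the shortest path length, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])
    open_set = [(h(*start), 0, start)]  # (f = g + h, g, cell)
    best_g = {start: 0}
    while open_set:
        f, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = g + 1
                    heapq.heappush(
                        open_set, (g + 1 + h(nr, nc), g + 1, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (0, 3)))  # 7: detours around the wall
```

Because the heuristic never overestimates the true cost, A* still finds the optimal path while expanding far fewer cells than uniform-cost search.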

Day 30: I am now passing 7/10 test cases in the first part of my A* project. I might start on the second part now. This is my last day posting in this channel, so I'll head over to the group's new LinkedIn Group, new Slack group, new Facebook Group, and new Discord Group. Where to begin?




Fast forward to July 2021: I was selected into the phase 2 scholarship and have submitted all 3 projects for grading. I even submitted all the career services projects, including the resume review, LinkedIn review, cover-letter review, and GitHub review. Don't click "Graduate" until you are ready, because you will be kicked out of the classroom chats, so say your farewells there in advance.






