I will graduate from the University of British Columbia in November 2023 with a major in Computer Science and a minor in Commerce.
I'm passionate about Software Development, Web Development, and UI/UX Design.
At my most recent internship, I was an SDE at Amazon, where I was tasked with two major milestones: the first was to improve the efficiency of a service owned by my team; the second was to design and implement a highly requested feature. I am also currently the coding manager at the UBC Visual Cognition Lab.
In my spare time you can catch me playing squash, soccer, spikeball, basketball... I love all sports. I also really enjoy listening to music, especially jazz, hip-hop, and rock. I love taking photos as well.
It’s COVID, and the UBC campus gyms have a capacity limit of 25 people. You were required to book a session using an online booking system, which was problematic: bookings for a session would open 48 hours in advance at noon, the entire campus would scramble to grab them, and the whole day would be booked out within a minute. If you weren’t fast enough, or were stuck in class, there would be no workout for you that day :(. Clearly, there needed to be a solution.
I built a simple Python web scraper that logs into the online booking system, watches for slots that open up when someone cancels their booking, and texts you a list of available openings via SMS (using the Twilio API). I deliberately chose SMS because it’s a high-priority interface: the likelihood of someone checking their phone and seeing a text message with available openings is higher than with, say, an app notification or email.
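The core of the scraper can be sketched as below. The booking page's HTML structure, the function names, and the slot markup are all illustrative assumptions (I don't show the real login flow or page layout here); the Twilio send is left as a comment since it needs real account credentials.

```python
import re


def parse_open_slots(html: str) -> list[str]:
    """Pull open session times out of the booking page HTML.

    Assumption: each open slot appears as <li class="open">HH:MM</li>.
    The real page structure would differ; this is just the shape of the idea.
    """
    return re.findall(r'<li class="open">([^<]+)</li>', html)


def format_sms(slots: list[str]) -> str:
    """Build the text-message body from a list of open slots."""
    if not slots:
        return "No gym openings right now."
    return "Gym openings: " + ", ".join(slots)


# Sending the message would use Twilio's REST client, roughly:
#   from twilio.rest import Client
#   client = Client(ACCOUNT_SID, AUTH_TOKEN)
#   client.messages.create(to=MY_NUMBER, from_=TWILIO_NUMBER,
#                          body=format_sms(slots))
```

In practice this would run on a loop (e.g. every minute), texting only when the set of open slots changes, so you aren't spammed with duplicate messages.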
I liked this project because I gave it to my close friends to use, and they actually found it to be useful. This was my first experience building something that had real users!
This project was built at NwHacks 2020. It was inspired by the many hateful and rude comments found on platforms such as YouTube, Twitter, and Reddit, as well as the idea that you yourself might be making such comments without realizing it. The motivation was that if users could be warned that their comment is potentially harmful or toxic, they would be less inclined to actually post it.
We chose to implement this project as a Chrome extension so that we could easily support a variety of websites. The extension takes the text from the textbox the user is typing in and sends it to a Node.js backend, which forwards it to Google’s Perspective API. The API returns a float between 0.00 and 1.00 corresponding to the “toxicity” of the text it was given; sentences that include swearing or hateful language return higher values.
We then chose a threshold value: any sentence scoring above it is considered too toxic for the sake of the Internet, and the user is alerted with a popup message.
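The scoring-and-threshold step can be sketched as follows. The extension's backend is Node.js, but for consistency with the rest of this page here's the same logic in Python; the threshold value of 0.8 is illustrative (the real cutoff was a design choice), while the request/response shapes match Perspective's `comments:analyze` endpoint.

```python
TOXICITY_THRESHOLD = 0.8  # illustrative cutoff, not the extension's actual value


def build_request(text: str) -> dict:
    """Request body for Perspective's analyze endpoint
    (POST https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze)."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }


def is_too_toxic(response: dict, threshold: float = TOXICITY_THRESHOLD) -> bool:
    """Pull the summary TOXICITY score out of a Perspective response
    and flag the comment when it exceeds the threshold."""
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score > threshold
```

When `is_too_toxic` returns true, the extension pops up the warning instead of letting the comment go out silently.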