# CUNY Borough of Manhattan Community College

Here is a set of RDS files that contain sf objects of state county boundaries. We are going to work with these using iteration and functions for some of this week’s work.

1. Let’s warm up with some sf practice. The function `readRDS()` reads in RDS files. The dplyr function `bind_rows()` can take rows of data frames, tibbles, or sf objects and bind them together properly. Using the `purrr` library, read in all of the county files and then combine them into a single data frame. Plot the result.
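A minimal sketch of this workflow, assuming the RDS files live in a `data/` folder (the path and file pattern are assumptions):

```r
library(purrr)
library(dplyr)
library(sf)
library(ggplot2)

# list all RDS files in the (assumed) data directory
county_files <- list.files("data", pattern = "\\.rds$", full.names = TRUE)

# read each file into a list of sf objects, then bind the rows together
all_counties <- map(county_files, readRDS) %>%
  bind_rows()

# plot the combined county geometries
ggplot(all_counties) +
  geom_sf()
```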
2. This is great. Now, I’m curious – is there a link between the number of counties in a state and the ratio of area of the largest county in the state to the total state area? Let’s find out!

A. Write a function that, given a state name, will use `readRDS()` to read in a single data file and fix up the CRS (these are all in lat/long – you want a Mollweide projection, in which distance is in meters). Plot Massachusetts to make sure everything works.
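One possible sketch, assuming the files are named like `data/massachusetts.rds` (the naming convention is an assumption); `st_transform()` with a Mollweide proj-string handles the CRS:

```r
library(sf)

read_state_counties <- function(state_name) {
  # read the raw sf object (assumed file naming convention)
  state <- readRDS(file.path("data", paste0(state_name, ".rds")))

  # files come in lat/long; reproject to Mollweide, where units are meters
  st_transform(state, crs = "+proj=moll +units=m")
}

# check that it works
plot(st_geometry(read_state_counties("massachusetts")))
```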

B. Write a function that, given an sf object of a single state and its counties, will return a one-row data frame with the number of counties, the area of the largest county, the average county area, the state’s area, and the ratio of the largest county’s area to the total state area. `st_area()` will help you calculate area – but you will need to `as.numeric()` the result, and note that if you take an sf object and use `summarize()` on it, it will merge all of the polygons into one.
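A sketch of one way to structure this, assuming each row of the sf object passed in is one county:

```r
library(dplyr)
library(sf)

state_summary <- function(state_sf) {
  # county areas in square meters, as plain numbers
  county_areas <- as.numeric(st_area(state_sf))

  # summarize() on an sf object unions the polygons into one state shape
  state_area <- state_sf %>%
    summarize() %>%
    st_area() %>%
    as.numeric()

  data.frame(
    n_counties     = nrow(state_sf),
    largest_county = max(county_areas),
    mean_county    = mean(county_areas),
    state_area     = state_area,
    largest_ratio  = max(county_areas) / state_area
  )
}
```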

C. Using iteration, make a data frame that has all of the above information for all of the states. +1 EXTRA CREDIT – have a column named `state` with the state name. (hint: `?setNames`)
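A sketch of the iteration, assuming the functions from 2A and 2B are named `read_state_counties()` and `state_summary()` (hypothetical names) and the files live in `data/`:

```r
library(purrr)

# derive state names from the file names (assumed naming convention)
state_files <- list.files("data", pattern = "\\.rds$")
state_names <- gsub("\\.rds$", "", state_files)

# a named vector makes map_df()'s .id column contain the state name
all_states <- map_df(setNames(state_names, state_names),
                     ~ state_summary(read_state_counties(.x)),
                     .id = "state")
```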

D. Plot that largest-county ratio against the number of counties! What do you learn? +1 extra credit for each exploration beyond this.

1. Install and load up the package `repurrrsive`. It has an object in it, `got_chars`, with information about the characters from the Game of Thrones series. Notice it is a list of lists. To explore it, check out `listviewer::jsonedit(got_chars, mode = "view")`.

Now, using `purrr` functions, make a tibble with the following columns:

• name
• aliases (a list column)
• gender
• culture
• allegiances (a list column)
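The extraction can be sketched with `purrr`’s character-name shortcuts: `map_chr()` pulls atomic fields, while `map()` keeps list columns intact:

```r
library(purrr)
library(tibble)
library(repurrrsive)

got_df <- tibble(
  name        = map_chr(got_chars, "name"),
  aliases     = map(got_chars, "aliases"),     # list column
  gender      = map_chr(got_chars, "gender"),
  culture     = map_chr(got_chars, "culture"),
  allegiances = map(got_chars, "allegiances")  # list column
)
```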
1. Who has more aliases on average? Men or women? Visualize however you see fit.
2. One thing that is cool about list columns is that we can filter on them. We can remove rows whose list column entries have a length of 0 with `filter(lengths(x) > 0)`, where x is some column name. Note we are using `lengths()` and not `length()`.

Another cool thing is that we can always `tidyr::unnest()` columns to expand them out, repeating, say, names or other elements of a data frame.
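A small illustration of the two ideas together, using a toy tibble (not the got_chars data):

```r
library(dplyr)
library(tidyr)
library(tibble)

toy <- tibble(
  name  = c("a", "b", "c"),
  items = list(c("x", "y"), character(0), "z")
)

toy %>%
  filter(lengths(items) > 0) %>%  # drops row "b", whose list entry is empty
  unnest(items)                   # one row per element, names repeated
```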

A. Select just name and aliases. Filter the resulting data down to something usable, and then unnest aliases. Use the resulting data to determine who had the most aliases!

B. Great! Now let’s use this idea of unnesting to build and then visualize a dataset that shows the breakdown, within each allegiance, of whether there are more aliases for men or women. What does this visualization teach you about the different allegiances?

E.C. +8 Write a function that takes a state name and plots the state, with the height of each county proportional to its percent of the state’s area, using `deckgl` or `mapdeck`.


Saying Goodbye: Reflections on Termination

Due: May 3

Termination is one of the most important parts of treatment for both the client and the human services intern. Oftentimes students do not realize the powerful and positive impact they have had on the lives of the clients they have worked with at their internships. Even if you don’t feel that you have made a connection to a particular client, the fact that a familiar face may no longer be in their life can bring up powerful emotions. Likewise, termination can bring up an array of feelings for the human services intern, including feelings of loss, pride in one’s accomplishments, relief, or reminders of prior separations, among others. For this reason, it is important to reflect on your feelings and thoughts and to make every effort to have a healthy and positive termination with clients and co-workers at your field placement.

Write a 3-4 page paper discussing the following:

1) Begin by summarizing what you have learned about termination from the assigned reading in the text (“Saying Goodbye” by Danowski).

2) How and when did you address termination with your clients? What was their response?

3) How do you feel about termination in relation to your clients? What do you think your clients’ responses are to you leaving?

4) What are your thoughts and feelings about termination with your supervisor? What have you learned from them?

5) What do you think is the most important contribution you have made to your field placement? You can discuss contributions you have made in terms of how you have impacted a client’s life or general skills you brought to the agency.

6) Briefly discuss your plans for the future including your career and educational goals. What skills did you develop at your internship that helped prepare you to meet your future goals?


Part I asks you to create a journal of every time you use any social networking service or site (including Snapchat, TikTok, Instagram, Facebook, Twitter, etc.) during an 8-hour period.

Check in every hour to record what kinds of social media you have been using during that hour. I would recommend using a spreadsheet or a table for each hour in your Word file, though you can write on a piece of paper and take a picture as well.

You will need to submit this log.
At the end of the 8-hour period, write a 2-page paper about the trends in your social media usage.

FOR YOUR JOURNAL, YOU ARE ASKED TO HAVE AN ENTRY FOR EACH OF THE 8 HOURS.

Part II asks you to stay off social media (Facebook, Twitter, Instagram, TikTok, Snapchat, etc.) for as long as you can (up to 8 hours). IF YOU DECIDE, YOU CAN ALSO STAY OFF ALL TECHNOLOGY FOR AS LONG AS YOU CAN.

During this period please keep a journal documenting your feelings after every hour.

After this 8-hour period, please write a 2-3 page double-spaced reflection on your experience. If you stop before 8 hours, please indicate what made you finally stop.

For this part, please try to give up ALL technology usage for as long as you can (including cellular phone, computer, etc.).

In short, you will be asked to submit four things:

1. A list or table of all social media platforms used in an 8-hour period for Part I (worth 5 points);

2. A 2-3 page reflection on trends in social media platform usage during the period in Part I (worth 10 points);

3. A journal documenting every hour you did not use social media or technology in Part II (worth 5 points);

4. A final reflection paper on your experience in Part II.


Instructions: A case brief is a short summary of a legal opinion. It contains a written summary of the basic components of that decision. It is a method of studying case law that helps students identify the key points of a legal opinion. Most case briefs contain similar information, but the headings and their sequence may differ. For this assignment you are required to follow the general format set forth below:

1. Case Name: Include the full citation, including the date of the opinion. The citation is usually next to the case name in the legal opinion.
2. Procedural History: The procedural history is the disposition of the case in the lower court(s) that explains how the case got to the court whose opinion you are reading. The procedural history must include: (1) the lower court(s) that heard the case; and (2) who appealed that decision.
3. Statement of Facts: Include only the facts that were relevant to the court’s decision. You are unlikely to know what these are until you have read the entire opinion. Many cases may include procedural facts that are relevant to the decision in addition to the facts that happened before litigation.
4. Issue: The question the court had to decide in this case. It usually includes specific facts as well as a legal question. It may be expressed or implied in the decision. Cases may have more than one issue.
5. Holding/Decision: The legal answer to the issue. If the issue is clearly written, then the holding can be expressed as a “yes” or “no.”
6. Rule of Law: The general legal principles relevant to the particular factual situation presented in the case.
7. Reasoning: The logical steps the court takes to arrive at the holding. This is the court’s analysis of the issues and the heart of the case brief. It can be straightforward and obvious, or you may have to extrapolate it from the holding. The reasoning states why the court made that decision. It should be the longest section in the brief.
8. Judgment/Disposition: The judgment is the court’s final decision as to the rights of the parties – the court’s response to a party’s request for relief. Generally, the appellate court will either affirm, reverse, or reverse with instructions. The judgment is usually found at the end of the opinion.

Please refer to your resources, including the grading rubric, the PowerPoint presentation on how to write a case brief, and the simple case brief provided for your reference. If you have any questions about the assignment, do not hesitate to reach out to me.

THIS is the first case brief: https://supreme.justia.com/cases/federal/us/384/43…

THIS is the second case brief: https://supreme.justia.com/cases/federal/us/372/33…

THIS is the third case brief: https://supreme.justia.com/cases/federal/us/559/35…


I’m working on a biology question and need an explanation and answer to help me learn.

Please write this up using RMarkdown. Make sure everything runs. Answer questions in text. Comment with abandon.

1. Create a vector of 100 randomly distributed numbers between 0 and 100 using `runif` and save the vector into the variable `my_vec`. What information do `str` and `summary` tell you about `my_vec`? How do they differ?
2. Load the `readxl` and `readr` libraries. They are part of the tidyverse, so you should have them. If not, `install.packages()` is your friend! Then, load the following data files: https://biol355.github.io/Data/my_data.csv using `read.csv` and `read_csv`, and https://biol355.github.io/Data/my_data.xlsx using `read_excel`. Looking at the three objects you loaded in, what are the differences or similarities between them?
3. What does the output of `str`, `summary`, `skimr::skim()`, and `visdat::vis_dat` tell you about the data you loaded? What is different or the same?
4. Add a column to the mtcars data called `Model` which uses the row names of mtcars (`rownames(mtcars)`) as its values. Show me the head of the data frame to see if it’s been done correctly. Note, to add a column to a data frame, we can specify `yourdf$new_col_name <- new_vector_we_are_adding` (note, that’s pseudo-code). Note how we are using the `$` notation to add a new column.
5. Let’s use the `bind_rows` function in dplyr, as it’s pretty powerful. Let’s say you want to add a new row to mtcars for a new model. Make a new data frame with the following columns: Model = Fizzywig, mpg = 31.415, awesomness = 11. Now try to make a new data frame where you `rbind` `mtcars` and this new data frame. What happens? Don’t do this in a markdown code chunk – just try it, and then report what happens. It might or might not go as planned (and RMarkdown can choke unless you add the appropriate argument to the code chunk – more on that soon)! Then, make a new data frame where you use `dplyr::bind_rows` to combine them. Examine the resulting data frame. What do you see? You can try this in a code chunk for your markdown. How do the two methods differ? Look at their help files for some information that might help you.

# Function lab exercises

Function template:

``````func_name <- function(arg1, arg2, ...) {
  func_code_here
  return(object_to_return)
}``````

In this template:

• `func_name` is what you decide to call your function. This is usually a verb that describes what the function does; e.g., `get_max_diff`, `get_first_year`, …
• `arg1` is the name of an argument (again, you decide what the name is). This is what you will call the input within the body of the function code.
• `func_code_here` is where you write the code. This is where you transform your inputs into the output.

Remember that a function takes input (which could be multiple things), does something to that input, and then returns some kind of output.

### Exercises

1. This may be a type of function you are more familiar with. It is an equation that converts Fahrenheit to Celsius. A previous student of mine was basically Fahrenheit-illiterate; she never knew what the weather was going to be like. Given this equation, can you write a function that converts a temperature value in Fahrenheit to Celsius for her?
• C = (F – 32) x 5/9

Take your function for a spin, does it return the correct values?

• 32 F = 0 C
• 50 F = 10 C
• 61 F = 16.11 C
• 212 F = 100 C
• -40 F = -40 C
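A minimal version of such a function, directly transcribing the equation:

```r
f_to_c <- function(temp_f) {
  # C = (F - 32) * 5/9
  (temp_f - 32) * 5 / 9
}

f_to_c(32)   # 0
f_to_c(212)  # 100
f_to_c(-40)  # -40
```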

2a. Given the following code chunk for reading buoy data files in for each year, describe the following:

• What parts of your code are consistent across every line/code chunk?
• What parts are different?
• What is the output that you want your function to return?
``````buoy_1987 <- read_csv('./data/buoydata/44013_1987.csv', na = c("99", "999"))
buoy_1988 <- read_csv('./data/buoydata/44013_1988.csv', na = c("99", "999"))
buoy_1989 <- read_csv('./data/buoydata/44013_1989.csv', na = c("99", "999"))
buoy_1990 <- read_csv('./data/buoydata/44013_1990.csv', na = c("99", "999"))``````

2b. Use the `str_c()` function to write a function that creates the filename for each year. I’ve given you an example below of using `str_c` for just 1986. Consider this your starting point to build out a function.

``str_c("./data/buoydata/44013_", 1986, ".csv", sep = "")``
``## [1] "./data/buoydata/44013_1986.csv"``

Extra credit (2 points): Check out the glue package and do the same thing with `glue()`.

2c. Complete the skeleton of this function based on the work that you have done up to now. Describe, in words, what is happening in every step.

``````read_buoy <- function(_________){

filename <- ___________________________

return(___________)

}``````

2d. Amend the read_buoy function to allow for a variable buoy number (currently we are using data from buoy 44013, but there are many other numbers/names that could be used!), directory location of the file, and year.

2e. Apply the workflow that you used in 2a–2c to create a function to clean up the data using a dplyr workflow that will work for 1987, 2000, and 2007. Have it generate daily averaged wave heights and temperatures, as well as renaming all of the columns to something understandable. Begin by writing a dplyr workflow for one data frame at a time. Then generalize it. Remember to ask yourself the following questions:

• What parts of your code are consistent across every line/code chunk?
• What parts are different?
• What is the output that you want your function to return?

If you are not sure of some of these things, remember to run the code chunks bit by bit, putting in test values (e.g., one year of data) to ensure that you know what you are working with, what each line is doing, and what the final returned value is. Your answer might look similar to what we did in class, or, very different depending on how you write the function.

### Modular Programming

3A-C. Using all that we previously created in the functions week and/or this homework, create a set of functions that, once a buoy is read in, returns a two-facet ggplot2 object: a histogram of the difference between wind speed (WSPD) and gust speed (GST) and of the difference between air temperature (ATMP) and water temperature (WTMP), so that you can later format and style it as you’d like. E.C. (+1 per question) Break the templates below into smaller modular functions.

``````gust_increase_hist <- function(a_year){
#get the cleaned buoy data

#create a long data frame with each row as a data point, measuring either
#difference between air and water OR wind speed and gust speed - one row per measurement
#with a column that says WHAT that measurement is

#create a plot

}

buoy_measured_diff_long <- function(a_buoy){
#with one buoy

#calculate differences between ATMP and WTMP as well as WSPD and GST

#pivot to make it long

#return the modified data
}

plot_dual_hist <- function(summarized_buoy){
#create a ggplot with a single variable as the x

#make a histogram

#facet by the measurement type
}

#test it out!``````

### Final Project Prep

1. Based on the data set you’re planning to use for your final, do you need to write any functions to clean the data as you bring it in? If so, describe it, and take a stab at writing it. If not, show us that the data loads cleanly.
2. With the data you just loaded, make one visualization. But, before you do, articulate a question you want to answer with said visualization. What do you think you will see? Now make the plot. Did you see what you expected? What did the data tell you?


### INTRO

For this week’s homework, let’s work on mapping the COVID-19 data. You have two choices of data source. The first is the coronavirus data we have already loaded.

``library(coronavirus)``
``````##   Province.State Country.Region      Lat     Long       date cases      type
## 1                         Japan 35.67620 139.6503 2020-01-22     2 confirmed
## 2                   South Korea 37.56650 126.9780 2020-01-22     1 confirmed
## 3                      Thailand 13.75630 100.5018 2020-01-22     2 confirmed
## 4          Anhui Mainland China 31.82571 117.2264 2020-01-22     1 confirmed
## 5        Beijing Mainland China 40.18238 116.4142 2020-01-22    14 confirmed
## 6      Chongqing Mainland China 30.05718 107.8740 2020-01-22     6 confirmed``````

The second is a newer dataset. It harvests data that is from the New York Times. It is focused solely on the US. To install it, you’ll need to do the following

``````#if you don't have it already
install.packages("devtools")

#install the library from github
devtools::install_github("covid19R/covid19nytimes")``````
``````library(covid19nytimes)
covid_states <- refresh_covid19nytimes_states()``````
``````## # A tibble: 6 x 7
##   date       location location_type location_standa… location_standa… data_type
##   <date>     <chr>    <chr>         <chr>            <chr>            <chr>
## 1 2020-01-21 Washing… state         53               fips_code        cases_to…
## 2 2020-01-21 Washing… state         53               fips_code        deaths_t…
## 3 2020-01-22 Washing… state         53               fips_code        cases_to…
## 4 2020-01-22 Washing… state         53               fips_code        deaths_t…
## 5 2020-01-23 Washing… state         53               fips_code        cases_to…
## 6 2020-01-23 Washing… state         53               fips_code        deaths_t…
## # … with 1 more variable: value <dbl>``````
``covid_counties <- refresh_covid19nytimes_counties()``
``````## # A tibble: 6 x 7
##   date       location location_type location_standa… location_standa… data_type
##   <date>     <chr>    <chr>         <chr>            <chr>            <chr>
## 1 2020-01-21 Snohomi… county_state  53061            fips_code        cases_to…
## 2 2020-01-21 Snohomi… county_state  53061            fips_code        deaths_t…
## 3 2020-01-22 Snohomi… county_state  53061            fips_code        cases_to…
## 4 2020-01-22 Snohomi… county_state  53061            fips_code        deaths_t…
## 5 2020-01-23 Snohomi… county_state  53061            fips_code        cases_to…
## 6 2020-01-23 Snohomi… county_state  53061            fips_code        deaths_t…
## # … with 1 more variable: value <dbl>``````

Now you have three data sets to choose from: countries, states, or counties. Remember, with the coronavirus data, you have to do some `dplyr::summarize()`-ing to get it down to countries, though!

## Maps to use for this assignment

OK, so we need world, US state, and US county maps – depending on which of the three datasets you chose.

``library(sf)``
``## Linking to GEOS 3.7.2, GDAL 2.4.2, PROJ 5.2.0``
``````#The world
library(rnaturalearth)
world_map <- ne_countries()

#US States
library(USAboundaries)
us_states <- us_states()

#US Counties
us_counties <- us_counties()``````

Armed with this, let’s make some maps!

### QUESTIONS

1. Which data set – or aspect of a single data set – are you most interested in? Sort through the datasets. What is there? Is it the world? A single country? Multiple countries? All states? Counties in one state?

Filter or summarize your data to just what you are interested in, in terms of space.

For example

``library(dplyr)``
``````##
## Attaching package: 'dplyr'``````
``````## The following objects are masked from 'package:stats':
##
##     filter, lag``````
``````## The following objects are masked from 'package:base':
##
##     intersect, setdiff, setequal, union``````
``````florida_covid <- covid_counties %>%
  filter(stringr::str_detect(location, "[Ff]lorida"))

florida_map <- us_counties %>%
  filter(state_name == "Florida")``````
2. What type or types of data from that dataset are you interested in? Why? Filter the dataset to that data type only.
3. What do you want to learn from this slice of the data? Formulate a question and write it out here.
4. Filter and manipulate the data so that it is in a format to be used to answer the question.
5. Join the covid data with spatial data to build a map.
6. Create a map from this data! Make it awesome!
7. What do you learn from the map you made?
8. This static map is, I’m sure, great. Load up `tmap` and make it dynamic! Is there anything different you can learn from this form of visualization?
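As a sketch of the join-then-map step, building on the Florida example: assume you have summarized the covid data down to one row per county with a `value` column and added a `name` column matching `us_counties`’ county names (`florida_covid_summary` and both column names are hypothetical – your join keys will depend on how you cleaned the data):

```r
library(dplyr)
library(sf)
library(ggplot2)

# join the (assumed) summarized covid data onto the county geometries
florida_joined <- florida_map %>%
  left_join(florida_covid_summary, by = "name")

# a simple choropleth of the joined values
ggplot(florida_joined) +
  geom_sf(aes(fill = value)) +
  scale_fill_viridis_c() +
  theme_void()
```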