Assignment 1, 2A, 2B completed. Added files to respective folders. #35

Open · wants to merge 37 commits into main

Changes from all commits (37 commits)
All commits authored by Rodger2041 on Jul 8, 2023:

2ece512  Create 1a&4.ipynb
6c16a58  Add files via upload
03f51ee  Add files via upload
c48cbdf  Add files via upload
3747fd6  Create Folder
9f18bef  Added all the files for Assignment 2A
418b117  Fixed Markdown file
afdeb0c  Deleted unnecessary files
db3b460  Create Assignment1.ipynb
cf62516  Add files via upload
3ac6aa3  Delete Assignment1.ipynb
c2026e6  Create Directories
9cecebf  Added files
cd1d731  Delete a
11e1f74  Create __init__.py
6e3045b  Add files via upload
032fa39  Create s
dcce4db  Add files via upload
6938761  Add files via upload
793d4d7  Delete s
768e065  Create a
bf28a91  Delete a
f39404b  Create a
8f9146b  Add files via upload
1404281  Delete a
9a0b7b1  Add files via upload
b0ce7af  Create a
124aec1  Add files via upload
920153f  Delete a
902e8ec  Create a
c4b86d4  Add files via upload
dc733e9  Delete a
4a29564  Add files via upload
86cdd1f  Create a
dc0793b  Add files via upload
4a99db7  Delete a
46c4d12  Merge pull request #1 from Rodger2041/Rodger2041-patch-1
35 changes: 35 additions & 0 deletions assMath/ass2B/PranjalGautam/1a&4.ipynb
@@ -0,0 +1,35 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1a\n",
"When we talk about $p(y|x)$, we need to find some estimate of y given the different values of x. We can achieve this by using the mean of $p(y|x)$ (the expectation of y given x) to find out the most likely value of y given the input x. It acts as a n unbiased and consistent estimator of the true mean."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4\n",
"### a.\n",
"When we have a lot of data points to work with and the number of predictors is small, a flexible statistical learning method is expected to perform better than an inflexible one. As the number of data points is large, it can easily find the true pattern and any noise can easily be offset by the number of samples. However, it might overfit the trainset and fail to provide accurate predictions on new data. We expect the model to have low bias (high if the model was overfit to the training set) and rvery low variance.\n",
"### b.\n",
"When the number of points are small, it generally becomes harder for flexible learning models to provide reliable results, they might not be able to estimate the true values and might overfit to the data very quickly. It becomes very hard to estimate the true values because the model will be much more prone to noise in the data due to the high number of predictors. We expect the model to have high bias and low variance.\n",
"### c.\n",
"If the relationship between the predictors and response variable is highly non linear and complex, we expect a flexible model to work better than an inflexible one. Flexible models are generally better at recogonising more complex relationships, and can use approximation to get a decent estimate of the response. Considering the biar variance tradeoff, we expect the model to be able to find the patterns and thus have a relatively low bias (again if we assume the model is not fitting the training set) however the variance in the predictions might be higher due to the complex nature of the model."
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
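As a side note on the 1a and question-4 cells above: fitting a model by least squares is exactly an estimate of the conditional mean $E[y|x]$, and polynomial degree is a simple proxy for flexibility. Below is a minimal sketch (not part of this PR) illustrating the 4a/4b claims; the sine ground truth, noise level, and sample sizes are illustrative assumptions, not values from the assignment.

```python
import numpy as np

rng = np.random.default_rng(0)

def test_mse(n_train, degree, n_test=1000):
    """Fit a polynomial of the given degree (a proxy for flexibility) and return its test MSE."""
    f = lambda x: np.sin(2 * x)                      # assumed non-linear ground truth
    x_tr = rng.uniform(0, 3, n_train)
    y_tr = f(x_tr) + rng.normal(0, 0.3, n_train)     # noisy training responses
    x_te = rng.uniform(0, 3, n_test)
    y_te = f(x_te) + rng.normal(0, 0.3, n_test)
    coeffs = np.polyfit(x_tr, y_tr, degree)          # least squares estimate of E[y|x]
    return np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)

# 4a: many samples, few predictors -> the flexible (degree-10) fit tracks the pattern
print(test_mse(n_train=2000, degree=1), test_mse(n_train=2000, degree=10))
# 4b: few samples -> the flexible fit chases noise and its test error grows
print(test_mse(n_train=15, degree=1), test_mse(n_train=15, degree=10))
```

On the typical run, the flexible fit wins clearly with 2000 training points but loses to the rigid linear fit with 15, matching the bias-variance reasoning in the answers above.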
170 changes: 170 additions & 0 deletions assMath/ass2B/PranjalGautam/1b.ipynb

Large diffs are not rendered by default.

141 changes: 141 additions & 0 deletions assMath/ass2B/PranjalGautam/1c.ipynb
@@ -0,0 +1,141 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"list2=np.random.exponential(20,100)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"list1=np.random.normal(0,0.5,100)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"list1+=list2\n",
"list1.sort()\n",
"# list1 is our final database where x follows some exponential distribution E(theta)\n",
"# we will estimate theta using gradient descent"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# y=(1/b)e^(-x/b), take 1/b as w,\n",
"#y=prod(we^(-wxi))\n",
"#log(y)=sum(log(w)-wxi)\n",
"#dlog(y)/dw=n/w-sum(xi)\n",
"#we use this as the loss function for gradient descent\n",
"#Now we define the function\n",
"def gradient_descent(x,w,L):\n",
" grad_w=-len(x)/w\n",
" for i in x:\n",
" grad_w+=i\n",
" if(grad_w<0):\n",
" L*=10\n",
" return max(w-grad_w*L,0.0001)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"w=1 #initial value\n",
"L=0.00001 #learning rate\n",
"epochs=100000"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"for i in range(epochs):\n",
" w=gradient_descent(list1,w,L)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.05008954922152509"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"w"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Here is a list of theta and w(predicted value of 1/theta)\n",
"$\\theta\\space\\space\\space\\space\\space\\space\\space\\space\\space\\space w$\\\n",
"$10\\space\\space\\space\\space\\space\\space\\space 0.09462341857288536$\\\n",
"$0.1 \\space\\space\\space\\space\\space 9.130099007007797$\\\n",
"$0.2\\space\\space\\space\\space\\space 4.922704924280886$\\\n",
"$5\\space\\space\\space\\space\\space\\space\\space\\space 0.19044194086134064$\\\n",
"for the final test using a high theta and some noise\\\n",
"$20\\space\\space\\space\\space\\space 0.045793632194344173$"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.1"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
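A quick sanity check worth noting for 1c (not in the PR): setting the log-likelihood derivative $n/w - \sum_i x_i$ to zero gives a closed-form MLE, $\hat{w} = n/\sum_i x_i = 1/\bar{x}$, so the gradient-descent result can be verified directly. The seed and noise level below are assumptions chosen to mirror the notebook's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(20, 100) + rng.normal(0, 0.5, 100)  # same setup as the notebook

# closed-form MLE: d log L / dw = n/w - sum(x_i) = 0  =>  w_hat = n / sum(x_i)
w_hat = len(x) / x.sum()   # equivalently 1 / x.mean()
print(w_hat)               # roughly 1/20, matching the w of about 0.05 found by gradient descent
```

Because the optimum is available in closed form here, gradient descent is purely an exercise; the check confirms the iterative answer converges to the analytic one.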
182 changes: 182 additions & 0 deletions assMath/ass2B/PranjalGautam/2.ipynb

Large diffs are not rendered by default.

258 changes: 258 additions & 0 deletions assMath/ass2B/PranjalGautam/3.ipynb

Large diffs are not rendered by default.
